I am currently learning how to use Docker with Django and a Postgres DB. Each time I try, I run the following command:
docker compose up --build -d --remove-orphans
All my containers start, but every time I open my Django admin site I cannot sign in with already registered superuser credentials. In pgAdmin, all the data created previously is stored and saved correctly. But after I shut down my computer and run docker compose up again, the previously saved data is not recognized by the new startup, even though it is still visible in my Postgres DB. How can I make the data be recognized at startup?
Here is my docker-compose.yml configurations:
version: "3.9"
services:
  api:
    build:
      context: .
      dockerfile: ./docker/local/django/Dockerfile
    # ports:
    #   - "8000:8000"
    command: /start
    volumes:
      - .:/app
      - ./staticfiles:/app/staticfiles
      - ./mediafiles:/app/mediafiles
    expose:
      - "8000"
    env_file:
      - .env
    depends_on:
      - postgres-db
      - redis
    networks:
      - estate-react
  client:
    build:
      context: ./client
      dockerfile: Dockerfile.dev
    restart: on-failure
    volumes:
      - ./client:/app
      - /app/node_modules
    networks:
      - estate-react
  postgres-db:
    image: postgres:12.0-alpine
    ports:
      - "5432:5432"
    volumes:
      - ./postgres_data:/var/lib/postgresql
    environment:
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=${POSTGRES_DB}
    networks:
      - estate-react
  redis:
    image: redis:5-alpine
    networks:
      - estate-react
  celery_worker:
    build:
      context: .
      dockerfile: ./docker/local/django/Dockerfile
    command: /start-celeryworker
    volumes:
      - .:/app
    env_file:
      - .env
    depends_on:
      - redis
      - postgres-db
    networks:
      - estate-react
  flower:
    build:
      context: .
      dockerfile: ./docker/local/django/Dockerfile
    command: /start-flower
    volumes:
      - .:/app
    env_file:
      - .env
    ports:
      - "5557:5555"
    depends_on:
      - redis
      - postgres-db
    networks:
      - estate-react
  nginx:
    restart: always
    depends_on:
      - api
    volumes:
      - static_volume:/app/staticfiles
      - media_volume:/app/mediafiles
    build:
      context: ./docker/local/nginx
      dockerfile: Dockerfile
    ports:
      - "8080:80"
    networks:
      - estate-react
networks:
  estate-react:
    driver: bridge
volumes:
  postgres_data:
  static_volume:
  media_volume:
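One thing I have been wondering about: the postgres image keeps its data under /var/lib/postgresql/data, and docker compose up --build recreates containers (discarding any anonymous volumes with them), so maybe I should mount the named volume already declared at the bottom of the file at that exact path instead of the bind mount on the parent directory? Something like this (just a sketch of what I mean, not my current config):

```yaml
# Possible fix (sketch): mount the declared named volume at the
# directory the postgres image actually writes its data to.
services:
  postgres-db:
    image: postgres:12.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data  # instead of ./postgres_data:/var/lib/postgresql

volumes:
  postgres_data:
```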
Here is the nginx default.conf used in Docker:
upstream api {
    server api:8000;
}

upstream client {
    server client:3000;
}

server {
    client_max_body_size 20M;
    listen 80;

    location /api/v1 {
        proxy_pass http://api;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }

    location /admin {
        proxy_pass http://api;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }

    location /staticfiles/ {
        alias /app/staticfiles/;
    }

    location /mediafiles/ {
        alias /app/mediafiles/;
    }

    location /sockjs-node {
        proxy_pass http://client;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
    }

    location / {
        proxy_pass http://client;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }
}
Your answer is highly appreciated!!!
Related
My project has two virtual environments, "main" and "test", and I want to host them on one server. I've been advised to use an nginx proxy to do this, but I'm not sure how, especially since each environment already has its own network:
Here is the docker-compose.yml for the backend of the "main" project (infra/main folder); the "test" project's backend is similar:
version: "3.8"
services:
  postgres:
    image: postgres:13.3
    container_name: postgres_main
    restart: always
    volumes:
      - postgres_data_main:/var/lib/postgresql/data
    ports:
      - 5432:5432
    env_file:
      - .env-main
    networks:
      - main_db_network
  backend:
    <...>
    depends_on:
      - postgres
    env_file:
      - .env-main
    networks:
      - main_db_network
      - main_swag_network
  migrations:
    <...>
networks:
  main_db_network:
    name: main_db_network
    external: true
  main_swag_network:
    name: main_swag_network
    external: true
volumes:
  postgres_data_main:
    name: postgres_data_main
  static_value_main:
    name: static_value_main
How do I set up a nginx_proxy to unite the two on one server?
You need to add a new nginx service, probably in a separate docker-compose file.
nginx.conf will look like:
upstream main {
    server backend:8000;  # name of the service in the compose file and its open port
}

upstream test {
    server test-backend:8000;
}

location /main {
    proxy_pass http://main;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $host;
    proxy_redirect off;
}

location /test {
    proxy_pass http://test;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $host;
    proxy_redirect off;
}
Or, instead of changing service names, you may differentiate by port: e.g. give main the mapping 8000:8000 and test the mapping 8001:8000.
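If you go the ports route, the relevant fragments might look like this (a sketch assuming both compose files keep the service name backend; only the host-side port differs):

```yaml
# infra/main/docker-compose.yml (fragment)
services:
  backend:
    ports:
      - "8000:8000"
---
# infra/test/docker-compose.yml (fragment)
services:
  backend:
    ports:
      - "8001:8000"  # same container port, different host port
```

Note that with identical service names on networks nginx joins, container DNS becomes ambiguous, so in this variant nginx would likely have to proxy to the Docker host's published ports rather than to the service names.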
Dockerfile for nginx:
FROM nginx:1.19.0-alpine
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx.conf /etc/nginx/conf.d
docker-compose.yml for serving Nginx
version: "3.8"
services:
  nginx:
    build: ./nginx
    ports:
      - "80:80"
    networks:
      - main_swag_network
      - test_swag_network
networks:
  main_swag_network:
    external: true
  test_swag_network:
    external: true
It just needs to run nginx and have connections to both networks defined in the test and main configs.
In my project I am using Django and nginx, but I want to manage my cloud databases through phpMyAdmin.
Django works fine, but phpMyAdmin does not: it runs under Apache at localhost:8080, while I want it served by nginx at localhost/phpmyadmin.
Here is the docker-compose.yml:
version: "3.9"
services:
  web:
    restart: always
    build:
      context: .
    env_file:
      - .env
    volumes:
      - ./project:/project
    expose:
      - 8000
  nginx:
    restart: always
    build: ./nginx
    volumes:
      - ./static:/static
    ports:
      - 80:80
    depends_on:
      - web
  phpmyadmin:
    image: phpmyadmin/phpmyadmin:latest
    restart: always
    environment:
      PMA_HOST: <host_address>
      PMA_USER: <user>
      PMA_PASSWORD: <password>
      PMA_PORT: 3306
      UPLOAD_LIMIT: 300M
    ports:
      - 8080:80
and nginx default.conf
upstream django {
    server web:8000;
}

server {
    listen 80;

    location / {
        proxy_pass http://django;
    }

    location /pma/ {
        proxy_pass http://localhost:8080/;
        proxy_buffering off;
    }

    location /static/ {
        alias /static/;
    }
}
I hope somebody will be able to tell me how to make nginx work as a reverse proxy for the phpMyAdmin docker container.
If some important information is missing please let me know.
You can access another docker container with its hostname and the internal port (not the exposed one).
Also a rewrite of the url is necessary.
location ~ \/pma {
    rewrite ^/pma(/.*)$ $1 break;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header Host $host;
    proxy_pass http://phpmyadmin;
}
I tested with this docker-compose.yml:
version: "3.9"
services:
  nginx:
    image: nginx:latest
    volumes:
      - ./templates:/etc/nginx/templates
    ports:
      - 80:80
  phpmyadmin:
    image: phpmyadmin/phpmyadmin:latest
I'm not sure how much sense it makes, but I was learning Docker in order to deploy a Django app with Gunicorn + Nginx + AWS.
So far it works fine; I have tested it in production.
My question is how can I access pgAdmin4 now?
docker-compose.staging.yml
version: '3.8'

# networks:
#   public_network:
#     name: public_network
#     driver: bridge

services:
  web:
    build:
      context: .
      dockerfile: Dockerfile.prod
    # image: <aws-account-id>.dkr.ecr.<aws-region>.amazonaws.com/django-ec2:web
    command: gunicorn djangotango.wsgi:application --bind 0.0.0.0:8000
    volumes:
      # - .:/home/app/web/
      - static_volume:/home/app/web/static
      - media_volume:/home/app/web/media
    expose:
      - 8000
    env_file:
      - ./.env.staging
    networks:
      service_network:
  db:
    image: postgres:12.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    env_file:
      - ./.env.staging.db
    networks:
      service_network:
    # depends_on:
    #   - web
  pgadmin:
    image: dpage/pgadmin4
    env_file:
      - ./.env.staging.db
    ports:
      - "8080:80"
    volumes:
      - pgadmin-data:/var/lib/pgadmin
    depends_on:
      - db
    links:
      - "db:pgsql-server"
    environment:
      - PGADMIN_DEFAULT_EMAIL=pgadmin4@pgadmin.org
      - PGADMIN_DEFAULT_PASSWORD=fakepassword
      - PGADMIN_LISTEN_PORT=80
    networks:
      service_network:
  nginx-proxy:
    build: nginx
    # image: <aws-account-id>.dkr.ecr.<aws-region>.amazonaws.com/django-ec2:nginx-proxy
    restart: always
    ports:
      - 443:443
      - 80:80
    networks:
      service_network:
    volumes:
      - static_volume:/home/app/web/static
      - media_volume:/home/app/web/media
      - certs:/etc/nginx/certs
      - html:/usr/share/nginx/html
      - vhost:/etc/nginx/vhost.d
      - /var/run/docker.sock:/tmp/docker.sock:ro
    labels:
      - "com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy"
    depends_on:
      - web
  nginx-proxy-letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    env_file:
      - .env.staging.proxy-companion
    networks:
      service_network:
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - certs:/etc/nginx/certs
      - html:/usr/share/nginx/html
      - vhost:/etc/nginx/vhost.d
    depends_on:
      - nginx-proxy

networks:
  service_network:

volumes:
  postgres_data:
  pgadmin-data:
  static_volume:
  media_volume:
  certs:
  html:
  vhost:
I can access the Django application through my domain name, e.g. xyz.example.com. I have only shown the docker-compose file here.
Locally, I can also access pgAdmin4 via localhost:8080.
Is it possible to do that in production? If yes, how?
I will eventually use AWS RDS for the database, but for now the database lives inside a Docker container, so I'm wondering how to access it.
I found some documentation.
https://www.pgadmin.org/docs/pgadmin4/development/container_deployment.html
The URL to access your pgAdmin page would be configured in nginx. For example:
server {
    listen 80;
    return 301 https://$host$request_uri;
}

server {
    listen 443;
    server_name _;

    ssl_certificate /etc/nginx/server.cert;
    ssl_certificate_key /etc/nginx/server.key;
    ssl on;
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
    ssl_prefer_server_ciphers on;

    location /pgadmin4/ {
        proxy_set_header X-Script-Name /pgadmin4;
        proxy_set_header X-Scheme $scheme;
        proxy_set_header Host $host;
        proxy_pass http://localhost:5050/;
        proxy_redirect off;
    }
}
The important part here is the location /pgadmin4/ block proxying to localhost:5050. In your case, it would be localhost:8080.
It looks like in your other post you included your nginx config:
https://www.digitalocean.com/community/questions/no-live-upstream-while-connecting-to-upstream-jwilder-ngnix-proxy
upstream djangotango.meghaggarwal.com {
    server web:8000;
}

server {
    listen 80;
    listen 443;
    server_name djangotango.meghaggarwal.com;

    location / {
        proxy_pass http://djangotango.meghaggarwal.com;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }

    location /static/ {
        alias /home/app/web/static/;
        add_header Access-Control-Allow-Origin *;
    }

    location /media/ {
        alias /home/app/web/media/;
        add_header Access-Control-Allow-Origin *;
    }
}
I would suggest adding a section like :
location /pgadmin4/ {
    proxy_set_header X-Script-Name /pgadmin4;
    proxy_set_header X-Scheme $scheme;
    proxy_set_header Host $host;
    proxy_pass http://localhost:8080/;
    proxy_redirect off;
}
It might not be the only configuration you need to add... I have only skimmed the documentation. I am sure the link may help you more if this doesn't do the trick.
I am dockerizing a Django application, also using Nginx as a proxy.
My problem is that the DB runs (non-dockerized) on the same machine that hosts the containers.
Therefore, I have to use network_mode: "host" for the Django app in order to connect to the DB via localhost.
On the other hand, if I do so, the nginx image can no longer reach the Django app and gives:
nginx: [emerg] host not found in upstream
My docker compose file is the following:
version: '3.4'
services:
  web:
    build: .
    command: sh /start.sh
    networks:
      - db_network
      - web_network
    volumes:
      - static_volume:/allarmi/src/AllarmiWs/static
    expose:
      - 8001
    environment:
      - DEBUG=0
      - DJANGO_SETTINGS_MODULE=AllarmiWs.settings.deploy
  nginx:
    build: ./nginx
    ports:
      - 8000:80
    depends_on:
      - web
    volumes:
      - static_volume:/allarmi/src/AllarmiWs/static
    networks:
      - web_network
volumes:
  static_volume:
networks:
  web_network:
    driver: bridge
  db_network:
    driver: host
This is my nginx.conf file
upstream allarmi {
    server web:8001;
}

server {
    listen 80;

    location / {
        proxy_pass http://allarmi;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }

    location /static/ {
        alias /allarmi/src/AllarmiWs/static/;
    }
}
How can I do this?
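As an aside on the same problem, one way to avoid network_mode: "host" entirely (assuming Docker 20.10+ on Linux, where the host-gateway alias is available) would be to keep web on the bridge network, so the nginx upstream can still resolve it, and reach the host-side DB through host.docker.internal instead of localhost. A sketch, not the original config:

```yaml
# Sketch: web stays on web_network (so nginx keeps resolving it),
# and the host's Postgres is reached via host.docker.internal.
services:
  web:
    build: .
    command: sh /start.sh
    networks:
      - web_network
    extra_hosts:
      - "host.docker.internal:host-gateway"  # Docker 20.10+ on Linux
    environment:
      - DB_HOST=host.docker.internal  # hypothetical variable read by Django settings
```

Django's DATABASES HOST setting would then point at host.docker.internal (here via the hypothetical DB_HOST variable), and the db_network with driver: host could be dropped, since it likely won't work as written anyway.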
When I run docker-compose up, everything works fine. My problem is with static files after a reboot: all containers start, but requests for static files get a 404.
Again, the problems begin only after a server reboot. As soon as I run:
docker-compose up
everything works perfectly.
docker-compose.yml
web:
  restart: always
  build: .
  command: /usr/local/bin/gunicorn ems3.wsgi:application -w 2 -b :8031
  volumes:
    - .:/code
  ports:
    - "8031:8031"
  links:
    - db
nginx:
  restart: always
  build: ./nginx/
  ports:
    - "8002:8002" # 443
    - "8001:8001" # 80
  volumes:
    - /www/static
  volumes_from:
    - web
  links:
    - web:web
db:
  restart: always
  image: postgres
  ports:
    - "5555:5555"
  environment:
    - POSTGRES_PASSWORD=mysecr3333
    - POSTGRES_USER=postgres
nginx config:
server {
    listen 8002 ssl default;

    location /static {
        alias /code/static;
    }

    location / {
        proxy_pass http://web:8031;
        proxy_set_header X-Real-IP $remote_addr;
        # proxy_set_header Host $host:$server_port;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
My problem was solved by switching to the docker-compose version 2 file format. It seems like a small difference, but it works. docker-compose.yaml now looks like this:
version: '2'
services:
  web:
    restart: always
    build: .
    command: /usr/local/bin/gunicorn proj.wsgi:application -w 2 -b :8031
    volumes:
      - web_code:/code
    ports:
      - "8031:8031"
    links:
      - db
volumes:
  web_code:
    driver: local
I added version 2, a named volume, and the volume declaration with driver: local.
Just a few lines, but so painful.
Here is another case like mine:
https://stackoverflow.com/a/36726663/2837890