How to Give a Postgres User Superuser Privileges Through docker-compose? - django

This is the postgres container section of my docker-compose file. These settings are fine, but my django app requires this user to have superuser privileges, granted by running this command inside PostgreSQL:
ALTER ROLE project_admin SUPERUSER;
How can this be accommodated inside this docker-compose file?
db:
  image: postgres:latest
  container_name: project-db
  environment:
    - POSTGRES_USER='project_admin'
    - POSTGRES_PASS='projectpass'
    - POSTGRES_DB='project'

You need to save your command as a script, say ./scripts/01_users.sql:
ALTER ROLE project_admin SUPERUSER;
Then in your docker-compose file:
...
db:
  image: postgres:latest
  container_name: project-db
  environment:
    - POSTGRES_USER='project_admin'
    - POSTGRES_PASS='projectpass'
    - POSTGRES_DB='project'
  volumes:
    - ./scripts/:/docker-entrypoint-initdb.d/
This will run the script when the database is first initialized (the entrypoint only executes scripts in docker-entrypoint-initdb.d against an empty data directory) and grant your user superuser privileges.
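If you want to confirm the privilege actually took effect, here is a small check (a sketch, not part of the original answer; it uses the Django database connection the app already has configured):
# Run from a Django shell: the role should report rolsuper = True once the
# init script has run against a fresh data directory.
from django.db import connection

with connection.cursor() as cursor:
    cursor.execute("SELECT rolsuper FROM pg_roles WHERE rolname = %s", ["project_admin"])
    print(cursor.fetchone())  # expected: (True,)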

Related

django-environ and Postgres environment for docker

I am using the django-environ package for my Django project.
I provided the DB URL in the .env file, which looks like this:
DATABASE_URL=psql://dbuser:dbpassword@dbhost:dbport/dbname
My DB settings in settings.py:
DATABASES = {
    "default": env.db(),
}
So far, I have no issues.
Then, I created a docker-compose.yml where I specified that my project uses a Postgres database, i.e.:
version: '3.8'
services:
  ...
  db:
    image: postgres:13
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=???
      - POSTGRES_PASSWORD=???
      - POSTGRES_DB=???
      - "POSTGRES_HOST_AUTH_METHOD=trust"
Now I am a little confused.
How do I provide these POSTGRES_* environment variables there? Do I need to provide them as separate variables alongside the DATABASE_URL in my .env file? If so, what's the best way to accomplish this? I aim to avoid duplication in my settings.
You can use variable expansion in your .env file. Something like:
DB_NAME=dbname
DB_USER=dbuser
DB_PASSWORD=dbpassword
DATABASE_URL=psql://$DB_USER:$DB_PASSWORD@dbhost:dbport/$DB_NAME
and then something like this in your compose file
services:
  postgresdb:
    container_name: projectname_db
    image: postgres:15
    environment:
      POSTGRES_DB: ${DB_NAME}
      POSTGRES_USER: ${DB_USER}
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    ports:
      - "127.0.0.1:5432:5432"
  ...
I am not exactly familiar with django-environ, but this should work.
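For completeness, here is a minimal settings.py sketch of how django-environ consumes that DATABASE_URL (the read_env() call is an assumption about where the .env file gets loaded; in Docker the same values can simply arrive as container environment variables):
# settings.py -- minimal django-environ setup (sketch)
import environ

env = environ.Env()
environ.Env.read_env()  # load variables from a .env file if one is present

DATABASES = {
    # env.db() parses DATABASE_URL, e.g. psql://dbuser:dbpassword@dbhost:dbport/dbname
    "default": env.db(),
}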

Unable to connect to Postgres DB living in one docker-compose file with django app in separate docker-compose file

I have a large monorepo Django app that I want to break into two separate repositories (one to handle external API requests and the other to handle the front end I plan on showing to users). I would still like both Django apps to have access to the same DB when running things locally. Is there a way for me to do this? I'm running Docker for both and am having issues with my front-end-facing Django app being able to connect to the Postgres DB I have set up in a separate docker-compose file from the one I made for my front-end app.
External API docker-compose file (Postgres DB docker image gets created here when running docker-compose up --build)
---
version: "3.9"
services:
  db:
    image: postgres:13.4
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  api:
    restart: always
    build: .
    image: &img img-one
    command: bash start.sh
    volumes:
      - .:/app
    ports:
      - "8000:8000"
    depends_on:
      - db
    env_file:
      - variables.env
Front-end-facing docker-compose file (this is the one I want to be able to connect to the DB above):
---
version: "3.9"
services:
  dashboard:
    restart: always
    build: .
    image: &img img-two
    volumes:
      - .:/code
    ports:
      - "8010:8010"
    depends_on:
      - react-app
    env_file:
      - variables.env
  react-app:
    restart: always
    build: .
    image: *img
    command: yarn start
    env_file:
      - variables.env
    volumes:
      - .:/app
      - /app/node_modules
    ports:
      - "3050:3050"
Below is the database configuration I have set up in the front-end Django app that I want to connect to the DB, but I keep getting connection refused errors when I try to run python manage.py runserver:
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.environ.get("DB_NAME", "postgres"),
        "USER": os.environ.get("DB_USERNAME", "postgres"),
        "PASSWORD": os.environ.get("DB_PASSWORD", "postgres"),
        "HOST": os.environ.get("DB_HOSTNAME", "db"),
        "PORT": os.environ.get("DB_PORT", 5432),
    }
}
Any ideas on how to fix this issue? (For reference, I've also tried changing HOST to localhost instead of db but still get the same connection refused errors)
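One common workaround (not from the original post, so treat it as an assumption) is to point the front-end app at the port the first compose project already publishes on the host:
# Sketch only: reach the db through the published "5432:5432" port from above.
# The hostname depends on where this app runs: "localhost" when run directly
# on the host, "host.docker.internal" from a container on Docker Desktop
# (on Linux this needs an extra_hosts entry).
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.environ.get("DB_NAME", "postgres"),
        "USER": os.environ.get("DB_USERNAME", "postgres"),
        "PASSWORD": os.environ.get("DB_PASSWORD", "postgres"),
        "HOST": os.environ.get("DB_HOSTNAME", "host.docker.internal"),
        "PORT": os.environ.get("DB_PORT", "5432"),
    }
}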

File writing failing in docker production environment

In my production environments I am failing to write to files. For example, I've set up a test task with Celery that writes the time to a file every minute:
import datetime
import json
# PATH and APP_DIR are defined elsewhere in the project (PATH is assumed to be os.path).

@celery_app.task(name='print_time')
def print_time():
    now = datetime.datetime.now().strftime('%Y %b %d %a #%H:%M')
    cur_time = {"now": now}
    print(f'The date and time sent: {cur_time}')
    json.dump(cur_time, open(PATH.abspath(PATH.join(APP_DIR, "data", "cur_time.json")), "w"))
    t = json.load(open(PATH.abspath(PATH.join(APP_DIR, "data", "cur_time.json"))))
    print(f'The date and time received: {t}')
Both print statements give the expected results; as of my writing this, they last printed:
The date and time sent: {'now': '2021 May 26 Wed #18:57'}
The date and time received: {'now': '2021 May 26 Wed #18:57'}
However, when I set up a view to display the contents:
class TimeView(TemplateView):
    def get_context_data(self, **kwargs):
        time = json.load(open(PATH.abspath(PATH.join(APP_DIR, "data", "cur_time.json"))))
        return time
It becomes clear that the file is not really being updated in the production environment: when I go to the URL, the time remains the same as it was when I originally rsynced the file from my development environment (which is successfully updating the file contents).
To verify this further I've also run cat cur_time.json and stat cur_time.json to confirm that the files are not being written to successfully.
Knowing that the files are not being updated, my question is two-fold. One, why are my print statements in the celery task printing the results as if the files are being updated? Two, what is the most likely cause and solution for this problem?
I was thinking it had to do with my Docker container's file-writing permissions, but I already changed the write permissions in the data directory by running chmod -R 777 data. Also, I haven't received any permission error messages, which seem to be thrown when permissions are the issue at hand. I'm starting to hit the limits of my knowledge and wondering if anyone has any idea what the problem/solution could be. Thank you.
Edit in response to comments:
I am using docker-compose. Here is my production.yml file:
version: '3'
volumes:
  production_postgres_data: {}
  production_postgres_data_backups: {}
  production_traefik: {}
services:
  django: &django
    build:
      context: .
      dockerfile: ./compose/production/django/Dockerfile
    image: myapp_production_django
    depends_on:
      - postgres
      - redis
    env_file:
      ...
    command: /start
  postgres:
    ...
  traefik:
    ...
  redis:
    image: redis:5.0
  celeryworker:
    <<: *django
    image: myapp_production_celeryworker
    command: /start-celeryworker
  celerybeat:
    <<: *django
    image: myapp_production_celerybeat
    command: /start-celerybeat
  flower:
    <<: *django
    image: myapp_production_flower
    command: /start-flower
Second edit in response to comments:
Here is a view of my local.yml file:
version: '3'
volumes:
  local_postgres_data: {}
  local_postgres_data_backups: {}
services:
  django: &django
    build:
      context: .
      dockerfile: ./compose/local/django/Dockerfile
    image: myapp_local_django
    container_name: django
    depends_on:
      - postgres
    volumes:
      - .:/app:z
    env_file:
      ...
    ports:
      - "8000:8000"
    command: /start
  postgres:
    build:
      context: .
      dockerfile: ./compose/production/postgres/Dockerfile
    image: myapp_production_postgres
    container_name: postgres
    volumes:
      - local_postgres_data:/var/lib/postgresql/data:Z
      - local_postgres_data_backups:/backups:z
    env_file:
      ...
  redis:
    image: redis:5.0
    container_name: redis
  celeryworker:
    <<: *django
    image: myapp_local_celeryworker
    container_name: celeryworker
    depends_on:
      - redis
      - postgres
    ports: []
    command: /start-celeryworker
  celerybeat:
    <<: *django
    image: myapp_local_celerybeat
    container_name: celerybeat
    depends_on:
      - redis
      - postgres
    ports: []
    command: /start-celerybeat
  flower:
    <<: *django
    image: myapp_local_flower
    container_name: flower
    ports:
      - "5555:5555"
    command: /start-flower
To give credit where it is due: the problem and solution were elegantly put forward by @IainShelvington in the comments above.
Reason for problem: "Any files you write in a docker container will not be written to the host machine unless you mount a volume and write to that volume."
Solution for problem: "Add a new volume to the global "volumes:" in your compose config. Mount that volume in the "django" service, all the celery services inherit from that service so it should be shared. Write and read the files from the location that you mounted (this should be completely different from the app mount, like "/celery-logs" or something)"
To demonstrate what this solution would look like in my specific example, I added the following to my production.yml file:
volumes:
  ...
  production_celery: {}
services:
  django: &django
    build:
      ...
    image: myapp_production_django
    depends_on:
      ...
    volumes:
      - production_celery:/app/celerydata:z
    env_file:
      ...
    command: /start
Then, all data files derived from my celery scripts were sent to and pulled from the new volume/directory titled "celerydata".
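As an illustration only (the path matches the mount above, but this exact task code is not from the original post), the test task would then target the mounted directory instead of the app checkout:
# Sketch: write task output under the mounted volume so it lives outside the
# container's writable layer. The path matches the /app/celerydata mount above;
# celery_app is the project's existing Celery instance, as in the question.
import datetime
import json
import os

CELERY_DATA_DIR = "/app/celerydata"

@celery_app.task(name='print_time')
def print_time():
    now = datetime.datetime.now().strftime('%Y %b %d %a #%H:%M')
    cur_time = {"now": now}
    with open(os.path.join(CELERY_DATA_DIR, "cur_time.json"), "w") as f:
        json.dump(cur_time, f)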
As mentioned in the comments, my app had previously depended on APScheduler, and I had grown accustomed to quickly writing data files to the host machine and being able to peek through them with ease. To once again view them on the host machine, and as a safety precaution (data redundancy), I started using the following command sequence to copy the files from the celerydata directory onto my local machine, where I can look through them with the added ease of a graphical interface:
docker ps # note container_id == ${CID} below
export CID=foobarbaz123
docker cp ${CID}:/app/celerydata ./celery_storage
At some point in the future I might make that into a script to run when starting up the container and will update the answer accordingly.

Django Environment Variables coming from docker-compose.yml but don't affect the project

I have been working on this all day and I am completely confused.
I have created a Django project and am using Docker and a docker-compose.yml to hold my environment variables. I was struggling to get the DEBUG variable to be False, but I have since found out that my SECRET_KEY isn't working either.
I have added a print statement after the SECRET_KEY and it prints out (test), as that is what I currently have in the docker-compose.yml file, but this should fail to build...
If I hard-code DEBUG I can get it to change, but I have completely removed the secret key and the project still starts. Any ideas where Django could be pulling this from, or how I can trace it back to see?
settings.py
SECRET_KEY = os.environ.get('SECRET_KEY')
DEBUG = os.environ.get('DEBUG')
docker-compose.yml
version: '3.8'
services:
  web:
    build: .
    container_name: django
    command: gunicorn config.wsgi -b 0.0.0.0:8000
    environment:
      - ENVIRONMENT=development
      - SECRET_KEY=(test)
      - DEBUG=0
      - DB_USERNAME=(test)
      - DB_PASSWORD=(test)
    volumes:
      - .:/code
    ports:
      - 8000:8000
    depends_on:
      - db
      - redis
  celery:
    build: .
    image: celery
    container_name: celery
    command: celery -A config worker -l INFO
    volumes:
      - .:/code
    environment:
      - SECRET_KEY=(test)
      - DEBUG=0
      - DJANGO_ALLOWED_HOSTS=['127.0.0.1','localhost']
      - CELERY_BROKER=redis://redis:6379/0
      - CELERY_BACKEND=redis://redis:6379/0
    depends_on:
      - db
      - redis
  celery-beat:
    build: .
    environment:
      - SECRET_KEY=(test)
      - CELERY_BROKER=redis://redis:6379/0
      - CELERY_BACKEND=redis://redis:6379/0
The reason was that False/0 from the docker-compose.yml was being passed as a string, and a non-empty string evaluates to True.
To solve this, use:
DEBUG = eval(os.environ.get('DEBUG', 'False'))
or
DEBUG = int(os.environ.get('DEBUG', 0))
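A hedged alternative (not from the original answer): parsing the flag explicitly avoids calling eval() on environment input and accepts the common truthy spellings:
import os

# Treat "1", "true", "yes", "on" (any case) as True; anything else as False.
DEBUG = os.environ.get('DEBUG', '0').strip().lower() in ('1', 'true', 'yes', 'on')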

Communication between two docker containers based on multi-tenant architecture

I have two Django projects (microservices), running in separate Docker containers. Both projects are using django-tenant-schemas. How can I send a request from service-bar to service-foo at the URL http://boohoo.site.com:18150/api/me/, where 18150 is the port of project-a? I need to use the tenant URL so that project-a can verify the tenant and process the request.
I can send a request using the container name, but that doesn't work: if I use http://site.foo:18150/api/me, the request goes through, but there's no tenant defined for site.foo.
Here's the docker-compose.yml:
version: '3.3'
services:
  db:
    container_name: site.postgres
    image: postgres
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  foo:
    container_name: site.foo
    build:
      context: ../poll
    command: python /app/foo/src/manage.py runserver 0.0.0.0:18150
    depends_on:
      - db
    environment:
      - DB_HOST=site.postgres
      - DJANGO_SETTINGS_MODULE=main.settings.dev
    stdin_open: true
    tty: true
    ports:
      - "18150:18150"
  bar:
    container_name: site.bar
    build:
      context: ../toll
    command: python /app/bar/src/manage.py runserver 0.0.0.0:18381
    depends_on:
      - db
    environment:
      - DB_HOST=site.postgres
      - DJANGO_SETTINGS_MODULE=main.settings.dev
    stdin_open: true
    tty: true
    ports:
      - "18381:18381"
You can do this using aliases on the default (or any other) network. For more info on this feature, see the documentation. I checked and this is supported by your current compose file version (3.3), although I do suggest you move up to the latest supported one if possible (3.7).
For compactness, I'm only reproducing the modified foo service declaration below, where I added just the necessary networks stanza.
foo:
  container_name: site.foo
  build:
    context: ../poll
  command: python /app/foo/src/manage.py runserver 0.0.0.0:18150
  depends_on:
    - db
  environment:
    - DB_HOST=site.postgres
    - DJANGO_SETTINGS_MODULE=main.settings.dev
  networks:
    default:
      aliases:
        - boohoo.site.com
  stdin_open: true
  tty: true
  ports:
    - "18150:18150"
After this change, your foo service container will be reachable from any other container on the same network either with foo (the service name), site.foo (your custom container name) or boohoo.site.com (the network alias).
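For example (a sketch, assuming the requests library is installed in the bar service's image), the cross-service call from the question would then look like this when run inside the bar container:
# The alias resolves to the foo service, and the Host header carries the
# tenant domain that django-tenant-schemas expects to see.
import requests

response = requests.get("http://boohoo.site.com:18150/api/me/", timeout=10)
print(response.status_code, response.text)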