Containerizing a django app built with the pydanny cookiecutter for deployment to an EC2 instance. The docker-compose.yml is pretty straightforward:
version: '2'

volumes:
  postgres_data: {}
  postgres_backup: {}

services:
  postgres:
    build: ./compose/postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - postgres_backup:/backups
    env_file: .env
....
Nothing exotic in the Dockerfile; just pointers to backup and restore scripts and commands to make them executable:
FROM postgres:9.4
# add backup scripts
ADD backup.sh /usr/local/bin/backup
ADD restore.sh /usr/local/bin/restore
ADD list-backups.sh /usr/local/bin/list-backups
# make them executable
RUN chmod +x /usr/local/bin/restore
RUN chmod +x /usr/local/bin/list-backups
RUN chmod +x /usr/local/bin/backup
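With the scripts baked into the image, a typical way to invoke them is by overriding the container command through compose. A sketch (the invocation is my assumption, not from the original post):

docker-compose run postgres backup        # write a dump into the postgres_backup volume
docker-compose run postgres list-backups  # list existing backups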
I've tried several variations on my db env variables, the latest of which looks like:
# PostgreSQL
POSTGRES_PASSWORD='postgrespass'
POSTGRES_USER='postgres'
the container builds and initializes without problem on:
docker-compose build postgres
docker-compose up -d
but when I try to make and migrate initial data to the db with:
docker-compose run django /usr/local/bin/python manage.py makemigrations
the db is unresponsive – "Postgres is unavailable - sleeping" and docker logs db returns:
DETAIL: Connection matched pg_hba.conf line 95: "host all all all md5"
FATAL: password authentication failed for user "'postgres'"
Obviously I have some permission issues, but I'm not quite sure how to address them. My containers are running on an Ubuntu 16.04 AMI.
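One detail worth flagging: the log shows the failing user as "'postgres'", quotes included. Unlike a shell, docker-compose passes env_file values through literally, so the quotes in the .env above become part of the user name and password. Unquoted values should match what the postgres image expects:

# PostgreSQL
POSTGRES_PASSWORD=postgrespass
POSTGRES_USER=postgres

Since the database was already initialized with the quoted credentials, the postgres_data volume may also need removing (docker-compose down -v) before the new values take effect.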
You can go to the psql console and change the password for the postgres user by typing the following commands in your terminal:
sudo -u postgres psql
postgres=# \password
Enter new password:
Enter it again:
postgres=#
Or, to reset the password if you have forgotten it:
ALTER USER "user_name" WITH PASSWORD 'new_password';
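In a compose setup like the one above, you would reach that psql console through the running container, for example (service name taken from the compose file):

docker-compose exec postgres psql -U postgres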
Related
There are many questions on here about similar issues that I went through, such as this, this, this and this, but none of their solutions solve my problem. Please don't close this question.
Problem:
I am running django with nginx and postgres on docker. Secret information is stored in an .env file. My postgres data is not persisting across docker-compose up/start and docker-compose down/stop/restart.
This is my docker-compose file:
version: '3.7'

services:
  web:
    build: ./app
    command: gunicorn umngane_project.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - ./app/:/usr/src/app/
    expose:
      - 8000
    environment:
      - SECRET_KEY=${SECRET}
      - SQL_ENGINE=django.db.backends.postgresql
      - SQL_DATABASE=postgres
      - SQL_USER=${POSTGRESQLUSER}
      - SQL_PASSWORD=${POSTGRESQLPASSWORD}
      - SQL_HOST=db
      - SQL_PORT=5432
      - SU_NAME=${SU_NAME}
      - SU_EMAIL=${SU_EMAIL}
      - SU_PASSWORD=${SU_PASSWORD}
    depends_on:
      - db
  db:
    image: postgres:11.2-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
  nginx:
    build: ./nginx
    volumes:
      - static_volume:/usr/src/app/assets
    ports:
      - 1337:80
    depends_on:
      - web

volumes:
  postgres_data:
    external: true # I tried running without this and the result is the same
  static_volume:
My entrypoint script is this:
python manage.py flush --no-input
python manage.py makemigrations
python manage.py migrate
python manage.py createsuperuser --user "${SU_NAME}" --email "${SU_EMAIL}" --password "${SU_PASSWORD}"
python manage.py collectstatic --no-input
exec "$@"
where createsuperuser is a custom module that creates a superuser in the application.
This setup is not persisting the information in postgres_data.
Additional information:
Before doing anything, I check to see that there is no volume named postgres_data using docker volume ls and get just that.
At which point I run docker-compose up -d/docker-compose up -d --build and everything works out fine with no errors.
I run docker inspect postgres_data and it shows "CreatedAt": "X1"
I am able to login as the superuser. I proceed to create admin users, logout as the superuser and then login as any of the admin users with no problem. I run docker exec -it postgres_data psql -U <postgres_user> to make sure the admin users are in the database and find just that.
At which point I proceed to run docker-compose down/docker-compose stop with no problem. I run docker volume ls and it shows that postgres_data is still there.
I run docker inspect postgres_data and it shows "CreatedAt": "X2"
To test that everything works as expected I run docker-compose up -d/docker-compose up -d --build/docker-compose start/docker-compose restart.
I run docker inspect postgres_data and it shows "CreatedAt": "X3"
At which point I proceed to try and login as an admin user and am not able to. I run docker exec -it postgres_data psql -U <postgres_user> again but this time only see the superuser, no admin users.
(Explanation: I am using the forward slash here to show the different things I tried on different attempts. I tried every combination of the commands shown.)
The issue is that you run flush in your entrypoint script, which clears the database. The entrypoint runs whenever you boot or recreate the container.
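A minimal reshaped entrypoint, assuming data should survive restarts, keeps only the idempotent steps and drops flush (and one-time setup such as createsuperuser):

python manage.py migrate
python manage.py collectstatic --no-input
exec "$@"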
One way of having persistent data is specifying an actual path on the disk instead of creating a volume:
...
db:
image: postgres:11.2-alpine
volumes:
- "/local/path/to/postgres/data:/var/lib/postgresql/data/"
...
This way, the container's postgres data location is mapped to a path you specify, and the data persists directly on disk unless purposely deleted.
(A named docker volume is not actually removed on container removal either; it persists until you delete it explicitly, for example with docker-compose down -v. The data loss here comes from the flush, not from the volume.)
Context
I am trying to run my Django application and Postgres database in a docker development environment using docker-compose (it's my first time using Docker).
I want to use my application with a custom role and database both named teddycrepineau (as opposed to using the default postgres user and db).
Goal
My goal is to deploy a web app powered on the front end by React and on the back end by a Django REST API, the whole thing running in docker.
System/Version
python: 3.7
django: 2.1
OS: Mac OS High Sierra
What error am I getting
When running docker-compose up with my custom role and db, I am getting the following error: django.db.utils.OperationalError: FATAL: role "teddycrepineau" does not exist. When running the same command with the default role and db postgres, Django starts normally.
My understanding was that running docker-compose up would create the role and db passed as environment variables.
What I have tried so far
I read multiple threads on this site, GitHub, and docker:
Tried to delete my container and rebuild it with the formatting suggested here
Went through this GitHub issue
Tried to move my environment variables from the .env file to the environment section inside my docker-compose.yml file and rebuild my container
Files
docker-compose.yml
version: '3'

volumes:
  postgres_data: {}

services:
  postgres:
    image: postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data
    env_file: .env
    ports:
      - "5432"
  django:
    build:
      context: teddycrepineau-backend
      dockerfile: teddycrepineau-root/Dockerfile
    command: ./teddycrepineau-backend/teddycrepineau-root/start.sh
    env_file: .env
    volumes:
      - .:/teddycrepineau-backend
    ports:
      - "8000:8000"
    depends_on:
      - postgres
Dockerfile
FROM python:3.7
ENV PYTHONUNBUFFERED 1
WORKDIR /teddycrepineau-backend/
ADD ./teddycrepineau-root/requirements.txt /teddycrepineau-backend/
RUN pip install -r requirements.txt
ADD . /teddycrepineau-backend/
RUN chmod +x ./teddycrepineau-root/start.sh
start.sh
#!/usr/bin/env bash
python3 ./teddycrepineau-backend/teddycrepineau-root/manage.py runserver
.env
POSTGRES_PASSWORD=
POSTGRES_USER=teddycrepineau
POSTGRES_DB=teddycrepineau
EDIT
My file structure is as follows:
root
|___ teddycrepineau-backend
|    |___ teddycrepineau-root
|         |___ teddycrepineau
|         |___ Dockerfile
|         |___ manage.py
|         |___ start.sh
|___ teddycrepineau-frontend
|    |___ React-App
|___ .env
|___ docker-compose.yml
When I move my docker-compose.yml file inside my backend folder, it starts as expected with the custom user and db (though I am not able to access my site at 127.0.0.1:8000, but that is mostly a different issue). When I put my docker-compose.yml file in my root folder, I get the error django.db.utils.OperationalError: FATAL: role "teddycrepineau" does not exist.
This happens because your postgres db was first launched without any envs. The postgres docker image only uses the envs the first time the container's data directory is initialized; after that it won't recreate the DB and users.
The solution is to remove the postgres volume so that the next time you docker-compose up you get a fresh db with the envs read. A simple way to do it is docker-compose down -v.
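As a sketch of the full reset, assuming the existing data can be discarded:

docker-compose down -v   # stop containers and remove the named volumes
docker-compose up -d     # postgres re-initializes and reads the env vars from .env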
Change your env order like this.
POSTGRES_DB=teddycrepineau
POSTGRES_USER=teddycrepineau
POSTGRES_PASSWORD=
I found it in this issue. I hope it works.
When you run
sudo docker-compose exec web python manage.py migrate
you will of course receive
django.db.utils.OperationalError: FATAL: role "user" does not exist
First you need to run
sudo docker-compose down -v
sudo docker system prune
Check that the containers have been deleted:
sudo docker ps -a
then check the images:
sudo docker image ls
Don't forget to delete the images:
sudo docker image rm 3e57319a7a3a
Go to the project folder and then check:
python manage.py migrate
If it doesn't work, run
python manage.py migrate --run-syncdb
and
sudo docker-compose up -d --build
sudo docker-compose exec web python manage.py collectstatic --no-input
sudo docker-compose exec web python manage.py makemigrations
sudo docker-compose exec web python manage.py migrate auth
sudo docker-compose exec web python manage.py migrate --run-syncdb
I encountered the issue due to a mismatch between the $POSTGRES_DB and $POSTGRES_USER variables. By default, psql attempts to connect to a database with the same name as the user logging in, so when the two variables differ it fails with an error along the lines of:
psql: FATAL: database "root" does not exist
I had to edit the init script that I was writing to include the -d "$POSTGRES_DB" option like so:
#!/bin/bash
set -e
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" -d "$POSTGRES_DB" <<-EOSQL
CREATE USER docker;
CREATE DATABASE docker;
GRANT ALL PRIVILEGES ON DATABASE docker TO docker;
EOSQL
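For reference, the official postgres image runs scripts like this from /docker-entrypoint-initdb.d/ during first initialization, so one way to wire it up is a bind mount in the compose file (the filename here is illustrative):

services:
  db:
    image: postgres
    volumes:
      - ./init-user-db.sh:/docker-entrypoint-initdb.d/init-user-db.sh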
I had an existing Django REST project with an existing MySQL database (named libraries) which I wanted to Dockerize.
My Dockerfile:
FROM python:2.7
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY . /code/
RUN pip install -r requirements.txt
My docker-compose:
version: '3'

services:
  db:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: libraries
      MYSQL_USER: root
      MYSQL_PASSWORD: root
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
Steps:
I ran: docker-compose build - build was successful
I ran: docker-compose up - had to run this command twice and then I could access my API by hitting localhost:8000
However, whenever I hit any API endpoint I get an error: Table "XYZ" does not exist, even though all the tables are present in my original database.
Why this happens?
First of all, it's strange that you had to run docker-compose up twice. I recommend running docker logs after the first run to see what goes wrong, then starting another question if you need help.
Regarding your main question, keep in mind that docker containers are stateless. That means unless you add persistent volume configurations, you'll get the same "fresh" one every time you start a new container.
Based on your compose file, there are two containers: a "web" one and a "db" one. A fresh "db" one just contains an empty MySQL instance with the db name, db user, and db password settings. There's no data in it. You have two options:
Run migrations from your "web" container to set up the db schema in your "db" container.
If you have data in your local/dev environment and want to use it, consider backing it up from your local setup and restoring it into your "db" container (a sketch follows this list). In case you don't know how, consult the MySQL documentation on backing up data, and the "Initializing a fresh instance" part of the MySQL docker hub page for how to start a new "db" container with some data.
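For the second option, a minimal sketch using the credentials and database name from the compose file above (adjust to your setup):

# dump the existing local database
mysqldump -u root -p libraries > libraries.sql
# load the dump into the running "db" container
docker-compose exec -T db mysql -u root -proot libraries < libraries.sql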
First you need to run the Django migrations:
$ docker exec -it [container] bash
# python manage.py migrate
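Equivalently, without opening an interactive shell (service name from the compose file above):

docker-compose exec web python manage.py migrate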
I've imported into PyCharm 5.1 Beta 2 a tutorial project, which works fine when I run it from the command line with docker-compose up: https://docs.docker.com/compose/django/
Trying to set a remote python interpreter is causing problems.
I've been trying to work out what the service name field is expecting:
remote interpreter - docker compose window - http://i.stack.imgur.com/Vah7P.png
My docker-compose.yml file is:
version: '2'

services:
  db:
    image: postgres
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
When I try to enter web or db or anything at all that comes to mind, I get an error message: Service definition is expected to be a map
So what am I supposed to enter there?
EDIT1 (new version: PyCharm 2016.1 release)
I have now updated to the latest version and am still having issues: IOError: [Errno 21] Is a directory
Sorry for not tagging all links - I have a new user link limit.
The only viable way we found to work around this (PyCharm 2016.1) is setting up an SSH remote interpreter.
Add this to the main service Dockerfile:
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:screencast' | chpasswd
RUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
Then log into the docker container like this (the password set in the code sample is 'screencast'):
$ ssh root@192.168.99.100 -p 2000
Note: we are aware the IP and port might change depending on your docker and compose configs
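For the -p 2000 in the ssh command to work, the container's port 22 has to be published. A sketch of the compose side (the mapping is inferred from the ssh command above):

services:
  web:
    ports:
      - "2000:22"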
For PyCharm just set up a remote SSH Interpreter and you are done!
https://www.jetbrains.com/help/pycharm/2016.1/configuring-remote-interpreters-via-ssh.html
I am on a virtualenv running on vagrant following http://gettingstartedwithdjango.com/en/lessons/introduction-and-launch/, and when I type:
$ sudo su postgres
I get asked to enter a password:
[sudo] password for postgres:
Does anyone have any tips for how to find the password? I'm confused that I'm even getting this password request, since previously, this is what happened:
createuser: creation of new role failed: ERROR: role "vagrant" already exists
postgres@precise64:/vagrant/projects/microblog$ createuser -P
could not change directory to "/vagrant/projects/microblog"
(blog-venv)vagrant@precise64:/vagrant/projects/microblog$ createdb microblog
createdb: database creation failed: ERROR: permission denied to create database
(blog-venv)vagrant@precise64:/vagrant/projects/microblog$ sudo su postgres
postgres@precise64:/vagrant/projects/microblog$ dropuser vagrant
could not change directory to "/vagrant/projects/microblog"
postgres@precise64:/vagrant/projects/microblog$ sudo -u postgres psql
postgres is not in the sudoers file. This incident will be reported.
postgres@precise64:/vagrant/projects/microblog/microblog$ sudo su postgres
[sudo] password for postgres:
Ultimately I'm trying to get createdb microblog to create a database for me in Postgres, but I'm running into these strange password request issues.
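Since the postgres system user is not in the sudoers file while the vagrant user is, one way around the password prompt (a suggestion, not from the original thread) is to stay as vagrant and run the admin command through sudo -u:

sudo -u postgres createdb -O vagrant microblog   # create the db, owned by the existing vagrant role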