How to launch a PostgreSQL database container alongside an Ubuntu Docker image? - django

How do you use docker-compose to launch PostgreSQL in one container, and allow it to be accessed by an Ubuntu 22 container?
My docker-compose.yml looks like:
version: "3.6"
services:
db:
image: postgres:14-alpine
environment:
- POSTGRES_USER=test
- POSTGRES_PASSWORD=test
- POSTGRES_DB=test
command: -c fsync=off -c synchronous_commit=off -c full_page_writes=off --max-connections=200 --shared-buffers=4GB --work-mem=20MB
tmpfs:
- /var/lib/postgresql
app_test:
build:
context: ..
dockerfile: Dockerfile
shm_size: '2gb'
volumes:
- /dev/shm:/dev/shm
My Dockerfile just runs a Django unittest suite that connects to the PostgreSQL database using the same credentials. However, it looks like the database periodically crashes or stops and restarts, breaking the tests' connection.
When I run:
docker-compose -f docker-compose.yml -p myproject up --build --exit-code-from myproject_app
I get output like:
Successfully built 45c74650b75f
Successfully tagged myproject_app:latest
Creating myproject_app_test_1 ... done
Creating myproject_db_1 ... done
Attaching to myproject_app_test_1, myproject_db_1
db_1 | The files belonging to this database system will be owned by user "postgres".
db_1 | This user must also own the server process.
db_1 |
db_1 | The database cluster will be initialized with locale "en_US.utf8".
db_1 | The default database encoding has accordingly been set to "UTF8".
db_1 | The default text search configuration will be set to "english".
db_1 |
db_1 | Data page checksums are disabled.
db_1 |
db_1 | fixing permissions on existing directory /var/lib/postgresql/data ... ok
db_1 | creating subdirectories ... ok
db_1 | selecting dynamic shared memory implementation ... posix
db_1 | selecting default max_connections ... 100
db_1 | selecting default shared_buffers ... 128MB
db_1 | selecting default time zone ... UTC
db_1 | creating configuration files ... ok
app_test_1 | SITE not set. Defaulting to myproject.
app_test_1 | Initialized settings for site "myproject".
db_1 | running bootstrap script ... ok
db_1 | performing post-bootstrap initialization ... sh: locale: not found
db_1 | 2022-09-23 02:06:54.175 UTC [30] WARNING: no usable system locales were found
db_1 | ok
db_1 | syncing data to disk ... initdb: warning: enabling "trust" authentication for local connections
db_1 | You can change this by editing pg_hba.conf or using the option -A, or
db_1 | --auth-local and --auth-host, the next time you run initdb.
db_1 | ok
db_1 |
db_1 |
db_1 | Success. You can now start the database server using:
db_1 |
db_1 | pg_ctl -D /var/lib/postgresql/data -l logfile start
db_1 |
db_1 | waiting for server to start....2022-09-23 02:06:56.736 UTC [36] LOG: starting PostgreSQL 14.4 on x86_64-pc-linux-musl, compiled by gcc (Alpine 11.2.1_git20220219) 11.2.1 20220219, 64-bit
db_1 | 2022-09-23 02:06:56.737 UTC [36] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 | 2022-09-23 02:06:56.743 UTC [37] LOG: database system was shut down at 2022-09-23 02:06:56 UTC
db_1 | 2022-09-23 02:06:56.750 UTC [36] LOG: database system is ready to accept connections
db_1 | done
db_1 | server started
db_1 | CREATE DATABASE
db_1 |
db_1 |
db_1 | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
db_1 |
db_1 | 2022-09-23 02:06:57.100 UTC [36] LOG: received fast shutdown request
db_1 | 2022-09-23 02:06:57.101 UTC [36] LOG: aborting any active transactions
db_1 | 2022-09-23 02:06:57.104 UTC [36] LOG: background worker "logical replication launcher" (PID 43) exited with exit code 1
db_1 | 2022-09-23 02:06:57.105 UTC [38] LOG: shutting down
db_1 | waiting for server to shut down....2022-09-23 02:06:57.222 UTC [36] LOG: database system is shut down
db_1 | done
db_1 | server stopped
db_1 |
db_1 | PostgreSQL init process complete; ready for start up.
db_1 |
db_1 | 2022-09-23 02:06:57.566 UTC [1] LOG: starting PostgreSQL 14.4 on x86_64-pc-linux-musl, compiled by gcc (Alpine 11.2.1_git20220219) 11.2.1 20220219, 64-bit
db_1 | 2022-09-23 02:06:57.568 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db_1 | 2022-09-23 02:06:57.569 UTC [1] LOG: listening on IPv6 address "::", port 5432
db_1 | 2022-09-23 02:06:57.571 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 | 2022-09-23 02:06:57.576 UTC [50] LOG: database system was shut down at 2022-09-23 02:06:57 UTC
db_1 | 2022-09-23 02:06:57.583 UTC [1] LOG: database system is ready to accept connections
db_1 | 2022-09-23 02:06:58.805 UTC [57] ERROR: relation "django_site" does not exist at character 78
db_1 | 2022-09-23 02:06:58.805 UTC [57] STATEMENT: SELECT "django_site"."id", "django_site"."domain", "django_site"."name" FROM "django_site" WHERE "django_site"."id" = 2 LIMIT 21
Why does the log show "shutting down" and "database system was shut down", implying the database restarts several times? And why is Django unable to access it to initialize the schema?

When working with databases in docker-compose you always need to wait for them to fully start. Either your program needs to ping and wait (not crashing after the first failed attempt to connect to the database, which is probably still starting up), or you can use the now-famous wait-for-it.sh script.
Below is an example of the second approach.
Dockerfile:
FROM debian:stable
WORKDIR /scripts
RUN apt-get update && apt-get install -y curl telnet
# there are many versions on the internet, I just picked one
RUN curl -sO https://raw.githubusercontent.com/vishnubob/wait-for-it/master/wait-for-it.sh && chmod a+x *.sh
ENTRYPOINT ["/scripts/wait-for-it.sh"]
It only prepares an image with the wait-for-it.sh script and telnet (to test the database connection).
docker-compose.yml
version: "3.6"
services:
db:
image: postgres:14-alpine
environment:
- POSTGRES_USER=test
- POSTGRES_PASSWORD=test
- POSTGRES_DB=test
command: -c fsync=off -c synchronous_commit=off -c full_page_writes=off --max-connections=200 --shared-buffers=4GB --work-mem=20MB
tmpfs:
- /var/lib/postgresql
test:
build:
context: .
command: db:5432 -t 3000 -- telnet db 5432
The test service will wait for the database to be available before starting its main process.
The best way to test it:
In one terminal start:
docker-compose up test
In a second terminal:
# make the operation even longer
docker rmi postgres:14-alpine
# start database
docker-compose up db
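The same idea can then be applied to your real app_test service by baking wait-for-it.sh into your image and wrapping the test command with it. A minimal sketch, assuming the script is copied into the image as /wait-for-it.sh and your suite runs via manage.py test:
app_test:
  build:
    context: ..
    dockerfile: Dockerfile
  depends_on:
    - db
  # wait up to 60s for db:5432, then run the Django test suite
  command: ["/wait-for-it.sh", "db:5432", "-t", "60", "--", "python", "manage.py", "test"]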
The reason why the database appears to restart during startup is the postgres image's init process: the entrypoint runs initdb, starts a temporary server (listening only on the Unix socket) to run any init scripts, then shuts it down and starts the real server, which is the "PostgreSQL init process complete; ready for start up." line in your log. Your tests were simply connecting before that final server was up.

Related

postgres & docker compose: change service status to ready after additional initialization

In my case, a postgres database is the main django backend database, and additional postgres initialization is required. The problem is that the postgres service reports ready before that additional initialization has run, so the dependent django app starts running prior to database initialization.
Is there any way to configure the postgres service so that it only becomes ready after the additional initialization?
docker-compose.yml:
version: "3.3"
services:
postgres:
image: library/postgres:11
volumes:
- some_folder:/docker-entrypoint-initdb.d
django_app:
image: custom_django_image:latest
volumes:
- $PWD:/app
ports:
- 8000:8000
depends_on:
- postgres
Your some_folder, which is mapped to the Postgres container's /docker-entrypoint-initdb.d location, is exactly where you should place your initialization scripts (and it seems you are already doing that). As long as there is no existing data in a volume attached to the Postgres container's /var/lib/postgresql/data directory (persisted data), Postgres will run those scripts on container creation before it reports itself ready. The scripts must be either .sh or .sql files (documentation). I'll show a typical workflow I use:
I have this script which creates multiple databases:
#!/bin/bash
# create-multiple-databases.sh
set -e
set -u

function create_user_and_database() {
    local database=$1
    echo "  Creating user and database '$database'"
    psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" <<-EOSQL
        CREATE USER $database;
        CREATE DATABASE $database;
        GRANT ALL PRIVILEGES ON DATABASE $database TO $database;
EOSQL
}

if [ -n "$POSTGRES_MULTIPLE_DATABASES" ]; then
    echo "Multiple database creation requested: $POSTGRES_MULTIPLE_DATABASES"
    for db in $(echo $POSTGRES_MULTIPLE_DATABASES | tr ',' ' '); do
        create_user_and_database $db
    done
    echo "Multiple databases created"
fi
In the docker-compose.yml file I set:
environment:
  - POSTGRES_MULTIPLE_DATABASES=dev,test
Now when running docker-compose up, I see the output from the script, and then finally:
postgres_1 | CREATE DATABASE
postgres_1 | GRANT
postgres_1 | Multiple databases created
...
postgres_1 | PostgreSQL init process complete; ready for start up.
postgres_1 |
postgres_1 | 2020-05-23 16:18:40.055 UTC [1] LOG: starting PostgreSQL 12.2 (Debian 12.2-2.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
postgres_1 | 2020-05-23 16:18:40.056 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
postgres_1 | 2020-05-23 16:18:40.056 UTC [1] LOG: listening on IPv6 address "::", port 5432
postgres_1 | 2020-05-23 16:18:40.063 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
Now inside my Django application, in a typical settings.py file (or a similar settings-related module), I create a while loop that only completes once the database is ready. Once it completes, Django continues its initialization and then starts the server.
import logging
import time

from django.db import connections
from django.db.utils import OperationalError

LOGGER = logging.getLogger(__name__)

while True:
    conn = connections["default"]  # or some other key in `DATABASES`
    try:
        c = conn.cursor()
        LOGGER.info("Postgres Ready")
        break
    except OperationalError:
        LOGGER.warning("Postgres Not Ready...")
        time.sleep(0.5)
Provided I have understood your question correctly, I hope this gives you the information you need.
I would highly suggest looking at health checks in docker-compose.yml.
You can set the healthcheck command to a PostgreSQL-specific check; only once the health check passes should Django start sending requests to the postgresql container.
Please consider the file below.
version: "3.3"
services:
postgres:
image: library/postgres:11
volumes:
- some_folder:/docker-entrypoint-initdb.d
healthcheck:
test: ["CMD-SHELL", "pg_isready -U postgres"]
interval: 10s
timeout: 5s
retries: 5
django_app:
image: custom_django_image:latest
volumes:
- $PWD:/app
ports:
- 8000:8000
depends_on:
- postgres
Ref:- https://docs.docker.com/compose/compose-file/#healthcheck
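Note that a plain depends_on list only controls start order; it does not wait for the health check to pass. To actually gate the app on the health check you need the long-form depends_on syntax, supported by Compose v2.x files and by newer versions of Docker Compose that implement the Compose Specification. A sketch, to be merged into the file above:
django_app:
  image: custom_django_image:latest
  depends_on:
    postgres:
      condition: service_healthy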
You can take a look at how Django Cookiecutter Template does it: https://github.com/pydanny/cookiecutter-django
What they do is use an entrypoint script for the production Docker setup: here
This entrypoint script checks whether postgres is up and running and only proceeds once it is; otherwise it keeps retrying the connection. You can write a similar script based on that snippet to check whether your postgres instance is up and running correctly. Whether it counts as ready is decided by your docker-entrypoint-initdb.d script, which should perform all the operations it requires. Once it is done, postgres starts up and the Django entrypoint can proceed because it can now reach the postgres instance.
It works well for me, I do some other checks in the entrypoint script before I proceed to starting the server itself.
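For reference, here is a minimal sketch of such an entrypoint check, assuming psycopg2 is installed and the usual POSTGRES_* environment variables are set; it is an illustration in the spirit of cookiecutter-django's script, not a copy of it:
#!/bin/sh
# entrypoint.sh - wait until postgres accepts connections, then run the real command

postgres_ready() {
python << 'END'
import os
import sys

import psycopg2

try:
    psycopg2.connect(
        dbname=os.environ.get("POSTGRES_DB", "postgres"),
        user=os.environ.get("POSTGRES_USER", "postgres"),
        password=os.environ.get("POSTGRES_PASSWORD", ""),
        host=os.environ.get("POSTGRES_HOST", "postgres"),
    )
except psycopg2.OperationalError:
    sys.exit(1)   # not reachable yet
sys.exit(0)       # postgres accepted the connection
END
}

until postgres_ready; do
    echo "Waiting for PostgreSQL to become available..."
    sleep 1
done

exec "$@"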

How to fix "Could not find datomic in catalog"

I'm trying to run datomic pro using a local postgresql, a transactor and a peer.
I'm able to start both the database and the transactor without any problem:
db-storage | 2019-09-01 21:26:34.823 UTC [1] LOG: starting PostgreSQL 12beta3 on x86_64-pc-linux-musl, compiled by gcc (Alpine 8.3.0) 8.3.0, 64-bit
db-storage | 2019-09-01 21:26:34.823 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db-storage | 2019-09-01 21:26:34.823 UTC [1] LOG: listening on IPv6 address "::", port 5432
db-storage | 2019-09-01 21:26:34.835 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db-storage | 2019-09-01 21:26:34.849 UTC [18] LOG: database system was shut down at 2019-09-01 21:25:15 UTC
db-storage | 2019-09-01 21:26:34.852 UTC [1] LOG: database system is ready to accept connections
db-transactor | Launching with Java options -server -Xms1g -Xmx1g -XX:+UseG1GC -XX:MaxGCPauseMillis=50
db-transactor | Starting datomic:sql://<DB-NAME>?jdbc:postgresql://localhost:5432/datomic?user=datomic&password=datomic-password, you may need to change the user and password parameters to work with your jdbc driver ...
db-transactor | System started datomic:sql://<DB-NAME>?jdbc:postgresql://localhost:5432/datomic?user=datomic&password=datomic-password, you may need to change the user and password parameters to work with your jdbc driver
(They're all running on containers with a network_mode=host)
I think these warnings may come from the fact that I'm using datomic as both the user and the database name, but I'm not sure.
But then, when I try to start a peer server, I'm faced with the following error:
$ ./bin/run -m datomic.peer-server -h localhost -p 8998 -a datomic-peer-user,datomic-peer-password -d datomic,datomic:sql://datomic?jdbc:postgresql://localhost:5432/datomic?user=datomic\&password=datomic-password
Exception in thread "main" java.lang.RuntimeException: Could not find datomic in catalog
at datomic.peer$get_connection$fn__18852.invoke(peer.clj:681)
at datomic.peer$get_connection.invokeStatic(peer.clj:669)
at datomic.peer$get_connection.invoke(peer.clj:666)
at datomic.peer$connect_uri.invokeStatic(peer.clj:763)
at datomic.peer$connect_uri.invoke(peer.clj:755)
(...)
at clojure.main$main.doInvoke(main.clj:561)
at clojure.lang.RestFn.applyTo(RestFn.java:137)
at clojure.lang.Var.applyTo(Var.java:705)
at clojure.main.main(main.java:37)
I've already tried changing a bunch of configurations with no success. Can someone help me?
I faced the same issue, but I found a solution after carefully exploring the docs.
The key is in this section of the documentation: https://docs.datomic.com/on-prem/overview/storage.html#connecting-to-transactor
After starting the transactor and before running a peer, go to the datomic base dir and execute the following:
bin/shell
## you are now inside the datomic shell
uri = "datomic:sql://datomic?jdbc:postgresql://localhost:5432/datomic?user=datomic&password=datomic";
Peer.createDatabase(uri);
## exit the datomic shell
That's all. After that you can run the peer server as you mentioned.
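The same database creation can also be done programmatically with the peer library from a Clojure REPL; a sketch, using the connection URI from the question (create-database returns true if the database was created and false if it already exists):
(require '[datomic.api :as d])

;; create the "datomic" database in the sql storage before starting the peer server
(d/create-database "datomic:sql://datomic?jdbc:postgresql://localhost:5432/datomic?user=datomic&password=datomic-password")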

postgres does not start. (involves Django and Docker)

I do
postgres -D /path/to/data
I get
2017-05-01 16:53:36 CDT LOG: could not bind IPv6 socket: No error
2017-05-01 16:53:36 CDT HINT: Is another postmaster already running on port 5432? If not, wait a few seconds and retry.
2017-05-01 16:53:36 CDT LOG: could not bind IPv4 socket: No error
2017-05-01 16:53:36 CDT HINT: Is another postmaster already running on port 5432? If not, wait a few seconds and retry.
2017-05-01 16:53:36 CDT WARNING: could not create listen socket for "*"
2017-05-01 16:53:36 CDT FATAL: could not create any TCP/IP sockets
Can someone help me figure out what is going wrong?
When I do
psql -U postgres -h localhost
it works just fine.
Though I need to start postgres to get it running on Docker and Django
Please help
Thanks in advance
Edit:
When I do
docker-compose up
I get
$ docker-compose up
Starting asynchttpproxy_postgres_1
Starting asynchttpproxy_web_1
Attaching to asynchttpproxy_postgres_1, asynchttpproxy_web_1
postgres_1 | LOG: database system was interrupted; last known up at 2017-05-01 21:27:43 UTC
postgres_1 | LOG: database system was not properly shut down; automatic recovery in progress
postgres_1 | LOG: invalid record length at 0/150F720: wanted 24, got 0
postgres_1 | LOG: redo is not required
postgres_1 | LOG: MultiXact member wraparound protections are now enabled
postgres_1 | LOG: database system is ready to accept connections
web_1 | Performing system checks...
web_1 |
web_1 | System check identified no issues (0 silenced).
web_1 | Unhandled exception in thread started by <function check_errors.<locals>.wrapper at 0x7f51f88bef28>
web_1 | Traceback (most recent call last):
web_1 | File "/usr/local/lib/python3.6/site-packages/django/db/backends/base/base.py", line 130, in ensure_connection
web_1 | self.connect()
web_1 | File "/usr/local/lib/python3.6/site-packages/django/db/backends/base/base.py", line 119, in connect
web_1 | self.connection = self.get_new_connection(conn_params)
web_1 | File "/usr/local/lib/python3.6/site-packages/django/db/backends/postgresql_psycopg2/base.py", line 176, in get_new_connection
web_1 | connection = Database.connect(**conn_params)
web_1 | File "/usr/local/lib/python3.6/site-packages/psycopg2/__init__.py", line 164, in connect
web_1 | conn = _connect(dsn, connection_factory=connection_factory, async=async)
web_1 | psycopg2.OperationalError: could not connect to server: Connection refused
web_1 | Is the server running on host "localhost" (127.0.0.1) and accepting
web_1 | TCP/IP connections on port 5432?
web_1 | could not connect to server: Cannot assign requested address
web_1 | Is the server running on host "localhost" (::1) and accepting
web_1 | TCP/IP connections on port 5432?
There are actually two problems being displayed here:
The first, when you are attempting to run PostgreSQL by hand, is that you are running it directly in your host operating system, and there is already a copy running there, occupying port 5432.
The second problem, when you are running docker, is that your Django settings are configured to point to a database on localhost. This will work when you are running outside of Docker, with that copy of PostgreSQL running directly in your host operating system, but will not work in Docker, as the web application and database server are running in separate containers.
To take care of the second problem, you simply need to update Django's settings to connect to the host postgres, since this is the name you used for your database container in your docker-compose.yml.
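For example, assuming the service is called postgres (as in your docker-compose.yml) and the stock image defaults, the relevant Django settings would look roughly like this; names and passwords are placeholders to adapt to your setup:
# settings.py (sketch)
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql_psycopg2",
        "NAME": "postgres",
        "USER": "postgres",
        "PASSWORD": "postgres",
        "HOST": "postgres",  # the compose service name, not localhost
        "PORT": 5432,
    }
}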
Your problem is that docker-compose launches postgres at the same time as django, so there is not enough time for the database setup to finish, as you can see from the connection error (TCP/IP connections on port 5432?). I solved this by running a bash command in docker-compose.yml and creating a wait-bd.sh script.
wait-bd.sh
#!/bin/bash
while true; do
    COUNT_PG=`psql postgresql://username:password@localhost:5432/name_db -c '\l' | grep "name_db" | wc -l`
    if ! [ "$COUNT_PG" -eq "0" ]; then
        break
    fi
    echo "Waiting Database Setup"
    sleep 10
done
and in docker-compose.yml add the command key to the django service:
django:
  build: .
  command: /bin/bash wait-bd.sh
This script waits for the database setup to finish, and only then does the django container carry on with its own setup.
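Note that as written the django container will exit once wait-bd.sh finishes; if it should also start Django afterwards, the command can chain the two, for example (the runserver invocation here is just an assumed placeholder):
django:
  build: .
  command: bash -c "./wait-bd.sh && python manage.py runserver 0.0.0.0:8000"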

Docker Compose + Flask reload even on invalid code

I have a Vagrant phusion/ubuntu-14.04 virtual machine, on which I installed docker and docker-compose. Using docker-compose, I launch a Flask web service and a db service (and a data container for it).
I use a similar manage.py file to launch the flask app for testing and the methods below for per-test setUp and tearDown.
I usually do docker-compose up so that I start all containers and can see their stdout. The thing I want to achieve is that on each code change the flask app is reloaded, even if the change breaks the code, and that the container doesn't die but keeps listening for further code changes.
Right now, if the code change doesn't break the code, the app is reloaded (achieved by setting flask's DEBUG to True and via the docker-compose.yaml).
As probably apparent I am new to Docker.
Here are all the relevant files: the Vagrantfile, docker-compose.yaml and Dockerfile.
config.vm.box = "phusion/ubuntu-14.04-amd64"
config.vm.network "private_network", ip: "192.168.33.69"
config.vm.synced_folder ".", "/vagrant_data"
# install docker via inline shell provisioning
docker-compose.yaml
web:
  restart: always  # not sure if this actually helps somehow
  build: .
  ports:
    - "80:80"
  expose:
    - "80"
  links:
    - postgres:postgres
  volumes:
    - .:/usr/src/app/
  env_file: .env
  command: /usr/local/bin/gunicorn --reload -w 2 -b :80 hello:app
# below are the db + data services
The Dockerfile used for building the web service is simply FROM python:3.5.1-onbuild
That's the folder structure
|-- docker-compose.yaml
|-- Dockerfile
|-- hello.py
|-- Procfile --heroku stuff
|-- requirements.txt
`-- Vagrantfile
And if I make an invalid code change, here's the log:
web_1 | File "/usr/local/lib/python3.5/traceback.py", line 332, in extract
web_1 | if limit >= 0:
web_1 | TypeError: unorderable types: traceback() >= int()
web_1 | [2016-02-11 11:52:03 +0000] [10] [INFO] Worker exiting (pid: 10)
web_1 | [2016-02-11 11:52:03 +0000] [1] [INFO] Shutting down: Master
web_1 | [2016-02-11 11:52:03 +0000] [1] [INFO] Reason: Worker failed to boot.
vagrantdata_web_1 exited with code 0
I think the restart: always is working, but the logs command doesn't re-attach to the new container. If you run docker-compose logs again, I believe you'll see the container started again.
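For example, with a reasonably recent docker-compose you can re-attach to and follow a single service's output with:
docker-compose logs -f web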

Heroku Foreman errors on 0.0.0.0:5000

I am trying to debug this issue using Heroku's Foreman app. My issue is that Foreman is trying to run the process on port 5000 at IP address 0.0.0.0. It says there is something using the port, but I am not sure how to figure out what, as I have nothing else running. I tried running 'netstat -lnt | grep 5000' and got no output.
> foreman start
11:16:45 web.1 | started with pid 17758
11:16:46 web.1 | 2013-12-31 11:16:46 [17758] [INFO] Starting gunicorn 18.0
11:16:46 web.1 | 2013-12-31 11:16:46 [17758] [ERROR] Connection in use: ('0.0.0.0', 5000)
11:16:46 web.1 | 2013-12-31 11:16:46 [17758] [ERROR] Retrying in 1 second.
11:16:47 web.1 | 2013-12-31 11:16:47 [17758] [ERROR] Connection in use: ('0.0.0.0', 5000)
11:16:47 web.1 | 2013-12-31 11:16:47 [17758] [ERROR] Retrying in 1 second.
I run the gunicorn command by itself and it works fine (so I was able to eliminate that as an issue):
> gunicorn hellodjango.wsgi
2013-12-31 11:25:33 [17853] [INFO] Starting gunicorn 18.0
2013-12-31 11:25:33 [17853] [INFO] Listening at: http://127.0.0.1:8000 (17853)
2013-12-31 11:25:33 [17853] [INFO] Using worker: sync
2013-12-31 11:25:33 [17856] [INFO] Booting worker with pid: 17856
I am running this on my Mac (10.8).
Any insight on how to figure this out would be greatly appreciated.
-rb
Upon further investigation, I discovered that 0.0.0.0:5000 is used by Bonjour, Apple's network discovery app. Looking into how to change the port for Foreman next.
Figured this all out.
The fix is to manually set the port in the env and run foreman that way.
export PORT=5001
then
> foreman start
13:22:23 web.1 | started with pid 18194
13:22:24 web.1 | 2013-12-31 13:22:24 [18194] [INFO] Starting gunicorn 18.0
13:22:24 web.1 | 2013-12-31 13:22:24 [18194] [INFO] Listening at: http://0.0.0.0:5001 (18194)
13:22:24 web.1 | 2013-12-31 13:22:24 [18194] [INFO] Using worker: sync
13:22:24 web.1 | 2013-12-31 13:22:24 [18197] [INFO] Booting worker with pid: 18197
I imagine this will be an issue for everyone on OSX and hopefully this will save some headaches.
-rb
responding here as this page is the top listing on Google for 'Connection in use: ('0.0.0.0', 5000)'
The answer at Virtual env: Connection in use error worked perfectly for me:
you can find the id of the gunicorn instance via ps ax | grep gunicorn and then kill it with kill <id>
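If ps shows nothing, lsof can usually identify whichever process is still listening on the port (assuming a macOS or Linux shell):
sudo lsof -nP -iTCP:5000 -sTCP:LISTEN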
I couldn't find a gunicorn process running to kill, and none of the other suggestions worked for me, so I tried restarting my Mac (OSX 10.9.2) with the "Reopen windows when logging back in" checkbox unchecked; after rebooting it started working again.