postgres does not start (involves Django and Docker)

I do
postgres -D /path/to/data
I get
2017-05-01 16:53:36 CDT LOG: could not bind IPv6 socket: No error
2017-05-01 16:53:36 CDT HINT: Is another postmaster already running on port 5432? If not, wait a few seconds and retry.
2017-05-01 16:53:36 CDT LOG: could not bind IPv4 socket: No error
2017-05-01 16:53:36 CDT HINT: Is another postmaster already running on port 5432? If not, wait a few seconds and retry.
2017-05-01 16:53:36 CDT WARNING: could not create listen socket for "*"
2017-05-01 16:53:36 CDT FATAL: could not create any TCP/IP sockets
Can someone help me figure out what is going wrong?
When I do
psql -U postgres -h localhost
it works just fine.
Though I do need to start Postgres to get things running with Docker and Django.
Thanks in advance.
Edit:
When I do
docker-compose up
I get
$ docker-compose up
Starting asynchttpproxy_postgres_1
Starting asynchttpproxy_web_1
Attaching to asynchttpproxy_postgres_1, asynchttpproxy_web_1
postgres_1 | LOG: database system was interrupted; last known up at 2017-05-01 21:27:43 UTC
postgres_1 | LOG: database system was not properly shut down; automatic recovery in progress
postgres_1 | LOG: invalid record length at 0/150F720: wanted 24, got 0
postgres_1 | LOG: redo is not required
postgres_1 | LOG: MultiXact member wraparound protections are now enabled
postgres_1 | LOG: database system is ready to accept connections
web_1 | Performing system checks...
web_1 |
web_1 | System check identified no issues (0 silenced).
web_1 | Unhandled exception in thread started by <function check_errors.<locals>.wrapper at 0x7f51f88bef28>
web_1 | Traceback (most recent call last):
web_1 | File "/usr/local/lib/python3.6/site-packages/django/db/backends/base/base.py", line 130, in ensure_connection
web_1 | self.connect()
web_1 | File "/usr/local/lib/python3.6/site-packages/django/db/backends/base/base.py", line 119, in connect
web_1 | self.connection = self.get_new_connection(conn_params)
web_1 | File "/usr/local/lib/python3.6/site-packages/django/db/backends/postgresql_psycopg2/base.py", line 176, in get_new_connection
web_1 | connection = Database.connect(**conn_params)
web_1 | File "/usr/local/lib/python3.6/site-packages/psycopg2/__init__.py", line 164, in connect
web_1 | conn = _connect(dsn, connection_factory=connection_factory, async=async)
web_1 | psycopg2.OperationalError: could not connect to server: Connection refused
web_1 | Is the server running on host "localhost" (127.0.0.1) and accepting
web_1 | TCP/IP connections on port 5432?
web_1 | could not connect to server: Cannot assign requested address
web_1 | Is the server running on host "localhost" (::1) and accepting
web_1 | TCP/IP connections on port 5432?

There are actually two problems being displayed here:
The first, when you attempt to run PostgreSQL by hand, is that you are starting it directly on your host operating system, where another copy is already running and occupying port 5432. Stop the host service (or simply rely on the containerized copy) and the port conflict goes away.
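If you want to confirm that something is already listening on 5432 before digging further, a quick check is possible from Python (a sketch; the address and port are just PostgreSQL's defaults):
# port_check.py - connect_ex returns 0 when something is already
# listening on 5432, i.e. another postmaster holds the port.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
in_use = sock.connect_ex(('127.0.0.1', 5432)) == 0
sock.close()
print('port 5432 already in use:', in_use)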
The second problem, when you are running docker, is that your Django settings are configured to point to a database on localhost. This will work when you are running outside of Docker, with that copy of PostgreSQL running directly in your host operating system, but will not work in Docker, as the web application and database server are running in separate containers.
To take care of the second problem, update Django's database settings to connect to the host postgres instead of localhost, since postgres is the name you gave the database container in your docker-compose.yml.
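For example, the relevant fragment of settings.py might look roughly like this (a sketch; the database name and credentials are assumptions based on the stock postgres image, not your real values):
# settings.py (fragment) - HOST must be the docker-compose service name.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'postgres',      # illustrative default
        'USER': 'postgres',      # illustrative default
        'PASSWORD': 'postgres',  # illustrative default
        'HOST': 'postgres',      # the service name from docker-compose.yml
        'PORT': 5432,
    }
}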

The problem is that docker-compose launches Postgres and Django at the same time, so the database has not finished its setup by the time Django tries to connect; that is what the TCP/IP connections on port 5432? part of your error means. I solved this by running a bash command from docker-compose.yml and creating a wait-bd.sh script.
wait-bd.sh
#!/bin/bash
while true; do
  COUNT_PG=`psql postgresql://username:password@localhost:5432/name_db -c '\l' | grep "name_db" | wc -l`
  if ! [ "$COUNT_PG" -eq "0" ]; then
    break
  fi
  echo "Waiting Database Setup"
  sleep 10
done
and in docker-compose.yml, add a command entry to the django container:
django:
  build: .
  command: /bin/bash wait-bd.sh
This script waits for the database setup to finish before the Django container carries on. Note that command replaces the container's default command, so you will likely want to chain the Django start after the wait, e.g. command: bash -c "bash wait-bd.sh && python manage.py runserver 0.0.0.0:8000".
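If you would rather not depend on the psql binary inside the Django image, the same wait loop can be sketched in Python with psycopg2 (the host, credentials, and database name are placeholders mirroring the bash script above):
# wait_db.py - keep trying to open a connection until the database answers.
import time
import psycopg2

while True:
    try:
        psycopg2.connect(host='localhost', port=5432, user='username',
                         password='password', dbname='name_db').close()
        break
    except psycopg2.OperationalError:
        print('Waiting Database Setup')
        time.sleep(10)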

Related

How to launch a Postgresql database container alongside an Ubuntu Docker image?

How do you use docker-compose to launch PostgreSQL in one container, and allow it to be accessed by an Ubuntu 22 container?
My docker-compose.yml looks like:
version: "3.6"
services:
db:
image: postgres:14-alpine
environment:
- POSTGRES_USER=test
- POSTGRES_PASSWORD=test
- POSTGRES_DB=test
command: -c fsync=off -c synchronous_commit=off -c full_page_writes=off --max-connections=200 --shared-buffers=4GB --work-mem=20MB
tmpfs:
- /var/lib/postgresql
app_test:
build:
context: ..
dockerfile: Dockerfile
shm_size: '2gb'
volumes:
- /dev/shm:/dev/shm
My Dockerfile just runs a Django unit-test suite that connects to the PostgreSQL database using the same credentials. However, it looks like the database periodically crashes, or stops and starts, breaking the tests' connection.
When I run:
docker-compose -f docker-compose.yml -p myproject up --build --exit-code-from myproject_app
I get output like:
Successfully built 45c74650b75f
Successfully tagged myproject_app:latest
Creating myproject_app_test_1 ... done
Creating myproject_db_1 ... done
Attaching to myproject_app_test_1, myproject_db_1
db_1 | The files belonging to this database system will be owned by user "postgres".
db_1 | This user must also own the server process.
db_1 |
db_1 | The database cluster will be initialized with locale "en_US.utf8".
db_1 | The default database encoding has accordingly been set to "UTF8".
db_1 | The default text search configuration will be set to "english".
db_1 |
db_1 | Data page checksums are disabled.
db_1 |
db_1 | fixing permissions on existing directory /var/lib/postgresql/data ... ok
db_1 | creating subdirectories ... ok
db_1 | selecting dynamic shared memory implementation ... posix
db_1 | selecting default max_connections ... 100
db_1 | selecting default shared_buffers ... 128MB
db_1 | selecting default time zone ... UTC
db_1 | creating configuration files ... ok
app_test_1 | SITE not set. Defaulting to myproject.
app_test_1 | Initialized settings for site "myproject".
db_1 | running bootstrap script ... ok
db_1 | performing post-bootstrap initialization ... sh: locale: not found
db_1 | 2022-09-23 02:06:54.175 UTC [30] WARNING: no usable system locales were found
db_1 | ok
db_1 | syncing data to disk ... initdb: warning: enabling "trust" authentication for local connections
db_1 | You can change this by editing pg_hba.conf or using the option -A, or
db_1 | --auth-local and --auth-host, the next time you run initdb.
db_1 | ok
db_1 |
db_1 |
db_1 | Success. You can now start the database server using:
db_1 |
db_1 | pg_ctl -D /var/lib/postgresql/data -l logfile start
db_1 |
db_1 | waiting for server to start....2022-09-23 02:06:56.736 UTC [36] LOG: starting PostgreSQL 14.4 on x86_64-pc-linux-musl, compiled by gcc (Alpine 11.2.1_git20220219) 11.2.1 20220219, 64-bit
db_1 | 2022-09-23 02:06:56.737 UTC [36] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 | 2022-09-23 02:06:56.743 UTC [37] LOG: database system was shut down at 2022-09-23 02:06:56 UTC
db_1 | 2022-09-23 02:06:56.750 UTC [36] LOG: database system is ready to accept connections
db_1 | done
db_1 | server started
db_1 | CREATE DATABASE
db_1 |
db_1 |
db_1 | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
db_1 |
db_1 | 2022-09-23 02:06:57.100 UTC [36] LOG: received fast shutdown request
db_1 | 2022-09-23 02:06:57.101 UTC [36] LOG: aborting any active transactions
db_1 | 2022-09-23 02:06:57.104 UTC [36] LOG: background worker "logical replication launcher" (PID 43) exited with exit code 1
db_1 | 2022-09-23 02:06:57.105 UTC [38] LOG: shutting down
db_1 | waiting for server to shut down....2022-09-23 02:06:57.222 UTC [36] LOG: database system is shut down
db_1 | done
db_1 | server stopped
db_1 |
db_1 | PostgreSQL init process complete; ready for start up.
db_1 |
db_1 | 2022-09-23 02:06:57.566 UTC [1] LOG: starting PostgreSQL 14.4 on x86_64-pc-linux-musl, compiled by gcc (Alpine 11.2.1_git20220219) 11.2.1 20220219, 64-bit
db_1 | 2022-09-23 02:06:57.568 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db_1 | 2022-09-23 02:06:57.569 UTC [1] LOG: listening on IPv6 address "::", port 5432
db_1 | 2022-09-23 02:06:57.571 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 | 2022-09-23 02:06:57.576 UTC [50] LOG: database system was shut down at 2022-09-23 02:06:57 UTC
db_1 | 2022-09-23 02:06:57.583 UTC [1] LOG: database system is ready to accept connections
db_1 | 2022-09-23 02:06:58.805 UTC [57] ERROR: relation "django_site" does not exist at character 78
db_1 | 2022-09-23 02:06:58.805 UTC [57] STATEMENT: SELECT "django_site"."id", "django_site"."domain", "django_site"."name" FROM "django_site" WHERE "django_site"."id" = 2 LIMIT 21
Why is it reading shutting down and database system was shut down, implying the database restarts several times? Why is Django unable to access it to initialize the schema?
When working with databases in docker-compose you always need to wait for them to fully start. Either your program pings and waits, rather than crashing after the first failed attempt to connect to the database (which is probably still starting up), or you can use the now-famous wait-for-it.sh script; a minimal Python sketch of the first option appears at the end of this answer.
Below is an example for the second approach.
Dockerfile:
FROM debian:stable
WORKDIR /scripts
RUN apt-get update && apt-get install -y curl telnet
# there are many versions on the internet, I just picked one
RUN curl -sO https://raw.githubusercontent.com/vishnubob/wait-for-it/master/wait-for-it.sh && chmod a+x *.sh
ENTRYPOINT ["/scripts/wait-for-it.sh"]
It only prepares an image with the wait-for-it.sh script and telnet (to test the database connection).
docker-compose.yml
version: "3.6"
services:
db:
image: postgres:14-alpine
environment:
- POSTGRES_USER=test
- POSTGRES_PASSWORD=test
- POSTGRES_DB=test
command: -c fsync=off -c synchronous_commit=off -c full_page_writes=off --max-connections=200 --shared-buffers=4GB --work-mem=20MB
tmpfs:
- /var/lib/postgresql
test:
build:
context: .
command: db:5432 -t 3000 -- telnet db 5432
The test service will wait for the database to be available before starting its main process.
The best way to test it:
In one terminal start:
docker-compose up test
In a second terminal:
# make the operation even longer
docker rmi postgres:14-alpine
# start database
docker-compose up db
The reason why the database restarts during startup is well explained in the comments.
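For completeness, here is a rough sketch of the first approach (ping and wait inside the program) using Django's own connection handling, run before the test suite starts; the settings module name is an assumption:
# wait_for_db.py - retries until PostgreSQL accepts connections instead of
# failing on the first attempt while the container is still initializing.
import os
import time

import django
from django.db import connection
from django.db.utils import OperationalError

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')  # assumed name
django.setup()

for attempt in range(30):
    try:
        connection.ensure_connection()  # opens a connection if none exists
        break
    except OperationalError:
        print('Database unavailable, waiting (attempt %d)...' % attempt)
        time.sleep(2)
else:
    raise SystemExit('Database never became available')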

How to fix "Could not find datomic in catalog"

I'm trying to run Datomic Pro using a local PostgreSQL instance, a transactor, and a peer.
I'm able to start both the database and the transactor without any problem:
db-storage | 2019-09-01 21:26:34.823 UTC [1] LOG: starting PostgreSQL 12beta3 on x86_64-pc-linux-musl, compiled by gcc (Alpine 8.3.0) 8.3.0, 64-bit
db-storage | 2019-09-01 21:26:34.823 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db-storage | 2019-09-01 21:26:34.823 UTC [1] LOG: listening on IPv6 address "::", port 5432
db-storage | 2019-09-01 21:26:34.835 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db-storage | 2019-09-01 21:26:34.849 UTC [18] LOG: database system was shut down at 2019-09-01 21:25:15 UTC
db-storage | 2019-09-01 21:26:34.852 UTC [1] LOG: database system is ready to accept connections
db-transactor | Launching with Java options -server -Xms1g -Xmx1g -XX:+UseG1GC -XX:MaxGCPauseMillis=50
db-transactor | Starting datomic:sql://<DB-NAME>?jdbc:postgresql://localhost:5432/datomic?user=datomic&password=datomic-password, you may need to change the user and password parameters to work with your jdbc driver ...
db-transactor | System started datomic:sql://<DB-NAME>?jdbc:postgresql://localhost:5432/datomic?user=datomic&password=datomic-password, you may need to change the user and password parameters to work with your jdbc driver
(They're all running in containers with network_mode=host.)
I think these warnings may come from the fact that I'm using datomic as both the user and the database name, but I'm not sure.
But then, when I try to start a peer server, I'm faced with the following error:
$ ./bin/run -m datomic.peer-server -h localhost -p 8998 -a datomic-peer-user,datomic-peer-password -d datomic,datomic:sql://datomic?jdbc:postgresql://localhost:5432/datomic?user=datomic\&password=datomic-password
Exception in thread "main" java.lang.RuntimeException: Could not find datomic in catalog
at datomic.peer$get_connection$fn__18852.invoke(peer.clj:681)
at datomic.peer$get_connection.invokeStatic(peer.clj:669)
at datomic.peer$get_connection.invoke(peer.clj:666)
at datomic.peer$connect_uri.invokeStatic(peer.clj:763)
at datomic.peer$connect_uri.invoke(peer.clj:755)
(...)
at clojure.main$main.doInvoke(main.clj:561)
at clojure.lang.RestFn.applyTo(RestFn.java:137)
at clojure.lang.Var.applyTo(Var.java:705)
at clojure.main.main(main.java:37)
I've already tried changing a bunch of configurations with no success. Can someone help me?
I faced the same issue, but I found a solution after carefully exploring the docs.
The key is presented in this section of the documentation: https://docs.datomic.com/on-prem/overview/storage.html#connecting-to-transactor
After running a transactor, and before you run a peer, go to the Datomic base dir and execute the following:
bin/shell
## now you are inside the Datomic shell
uri = "datomic:sql://datomic?jdbc:postgresql://localhost:5432/datomic?user=datomic&password=datomic-password";
Peer.createDatabase(uri);
## terminate your Datomic shell
That's all. After that you can run the peer server as you mentioned.

celery error connecting to postgresql while django can

I use Celery in my Django project. Django runs with uWSGI and works with PostgreSQL just fine, but it seems that Celery can't connect to PostgreSQL:
Traceback (most recent call last):
File "/home/classgram/www/env/lib/python3.6/site-packages/django/db/backends/base/base.py", line 216, in ensure_connection
self.connect()
File "/home/classgram/www/env/lib/python3.6/site-packages/django/db/backends/base/base.py", line 194, in connect
self.connection = self.get_new_connection(conn_params)
File "/home/classgram/www/env/lib/python3.6/site-packages/django/db/backends/postgresql/base.py", line 178, in get_new_connection
connection = Database.connect(**conn_params)
File "/home/classgram/www/env/lib/python3.6/site-packages/psycopg2/__init__.py", line 130, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: FATAL: password authentication failed for user "hamclassy"
FATAL: password authentication failed for user "hamclassy"
I'm working directly on the host, and the host OS is Ubuntu 18.04. Thank you.
This problem has nothing to do with Celery - it is obviously a typical PostgreSQL Access control issue.
It seems like your PostgreSQL server allows the hamclassy role (user) to connect from the machine where Django runs, but does not allow access from your Celery workers. The right place to look for a solution to your problem is the pg_hba.conf file on the PostgreSQL server.
On most Linux distributions, locate pg_hba.conf should give you the location of the file. Fedora has it at /var/lib/pgsql/data/pg_hba.conf; Ubuntu, on the other hand, keeps it under /etc/postgresql (for example /etc/postgresql/9.6/main/pg_hba.conf).
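To narrow it down, you could try a direct connection from the machine the Celery workers run on, using exactly the parameters from Django's settings (a sketch; the host, database, and password are placeholders):
# check_pg_auth.py - reproduces the connection attempt outside Celery;
# fill in the real values from Django's DATABASES setting.
import psycopg2

try:
    conn = psycopg2.connect(host='127.0.0.1', port=5432, user='hamclassy',
                            password='secret', dbname='mydb')
except psycopg2.OperationalError as exc:
    print('Connection failed:', exc)  # same FATAL error => not a Celery problem
else:
    print('Connected OK')
    conn.close()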

Unable to connect to CloudSQL from Kubernetes Engine (Can't connect to MySQL server on 'localhost')

I tried to follow the steps in:
https://cloud.google.com/sql/docs/mysql/connect-kubernetes-engine.
I have the application container and the cloudsql proxy container running in the same pod.
After creating the cluster, logs for the proxy container seems correct:
$kubectl logs users-app-HASH1-HASH2 cloudsql-proxy
2018/08/03 18:58:45 using credential file for authentication; email=it-test@tutorial-bookshelf-xxxxxx.iam.gserviceaccount.com
2018/08/03 18:58:45 Listening on 127.0.0.1:3306 for tutorial-bookshelf-xxxxxx:asia-south1:it-sample-01
2018/08/03 18:58:45 Ready for new connections
However logs from the application container throws up an unable to connect on localhost error:
$kubectl logs users-app-HASH1-HASH2 app-container
...
19:27:38 users_app.1 | return Connection(*args, **kwargs)
19:27:38 users_app.1 | File "/usr/local/lib/python3.7/site-packages/pymysql/connections.py", line 327, in __init__
19:27:38 users_app.1 | self.connect()
19:27:38 users_app.1 | File "/usr/local/lib/python3.7/site-packages/pymysql/connections.py", line 629, in connect
19:27:38 users_app.1 | raise exc
19:27:38 users_app.1 | sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'localhost' ([Errno 2] No such file or directory)") (Background on this error at: http://sqlalche.me/e/e3q8)
The SQLALCHEMY_DATABASE_URI is 'mysql+pymysql://{user}:{password}@/{database}?unix_socket=/cloudsql/{cloudsql_connection_name}' and is populated with the correct values (credentials that I set using kubectl secrets).
I'm sure I'm doing something silly here, so I'm hoping someone more experience on GCP could take a look and provide pointers on troubleshooting this issue.
UPDATE:
I just went to the GCP kubernetes engine page and opened up a shell on the app container and tried to connect to the cloud sql instance. That seemed to have worked.
$gcloud container cluster ......... -it /bin/sh
#python
>>> import pymysql
>>> connection = pymysql.connect(host='127.0.0.1', user='user', password='password', db='db')
>>> with connection.cursor() as cursor:
... cursor.execute("show databases;")
... tables = cursor.fetchall()
...
5
But the following (when I try and connect through sqlalchemy) fails:
>>> connection = pymysql.connect(host='127.0.0.1', user='u', password='p', db='d', unix_socket='/cloudsql/CONNECTION_NAME')
...
pymysql.err.OperationalError: (2003, "Can't connect to MySQL server on '127.0.0.1' ([Errno 2] No such file or directory)")
>>> from sqlalchemy import create_engine
>>> engine = create_engine('mysql://user:password@localhost/db')
>>> engine.connect()
Traceback (most recent call last):
...
sqlalchemy.exc.OperationalError: (_mysql_exceptions.OperationalError) (2002, 'Can\'t connect to local MySQL server through socket \'/run/mysqld/mysqld.sock\' (2 "No such file or directory")') (Background on this error at: http://sqlalche.me/e/e3q8)
>>> engine = create_engine('mysql+pymysql://user:password@/db?unix_socket=/cloudsql/tutorial-bookshelf-xxxx:asia-south1:test-01')
>>> engine.connect()
Traceback (most recent call last):
...
raise exc
sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'localhost' ([Errno 2] No such file or directory)") (Background on this error at: http://sqlalche.me/e/e3q8)
Connection to the CloudSQL via the proxy can be done by either a unix socket or a TCP connection, but you shouldn't be trying to use both at the same time.
I don't see any specifications on how you have configured your proxy, but if you wish to use a unix socket then your proxy instances flag should look like this: -instances=<INSTANCE_CONNECTION_NAME>. This will cause the proxy to create a unix socket in the /cloudsql directory that forwards traffic to your Cloud SQL instance. In this case, you'll set unix_socket=/cloudsql/<INSTANCE_CONNECTION_NAME> in your url.
If you are trying to connect via TCP socket, then use an instances flag like this: -instances=<INSTANCE_CONNECTION_NAME>=tcp:3306. This will tell the proxy to listen on port 3306 and forward traffic to your Cloud SQL instance. In this case, you'll use host='127.0.0.1' and port=3306.
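In pymysql terms, the two options look roughly like this (a sketch; the credentials and the instance connection name are placeholders to fill in):
# Two mutually exclusive ways to reach Cloud SQL through the proxy.
import pymysql

# TCP: proxy started with -instances=<INSTANCE_CONNECTION_NAME>=tcp:3306
conn = pymysql.connect(host='127.0.0.1', port=3306,
                       user='user', password='password', db='db')

# Unix socket: proxy started with -instances=<INSTANCE_CONNECTION_NAME>
conn = pymysql.connect(unix_socket='/cloudsql/<INSTANCE_CONNECTION_NAME>',
                       user='user', password='password', db='db')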
If you are looking for a hands on introduction to using CloudSQL on GKE, I encourage you to check out the codelab mentioned in this project: https://github.com/GoogleCloudPlatform/gmemegen
I have recently set up Cloud SQL Postgres via cloudsql-proxy. I have a few questions for you.
Do the credentials you are using for cloudsql-proxy have the Cloud SQL Client role?
Does your cloudsql-proxy container command look like this?
"/cloud_sql_proxy",
"--dir=/cloudsql",
"-instances=<INSTANCE_CONNECTION_NAME>=tcp:3306",
"-credential_file=/secrets/cloudsql/credentials.json"
It might be helpful if you could share your Kubernetes deployment.yml, which has both the app and proxy containers.
OK, posting an answer, but I'm not fully satisfied, so I'll wait for more.
I was able to connect to the Cloud SQL instance by changing the SQLALCHEMY_DATABASE_URI to 'mysql+pymysql://user:password@/db' (meaning I got rid of the unix socket connection string), so:
>>> engine = create_engine('mysql+pymysql://user:password@/db')
>>> engine.connect()
<sqlalchemy.engine.base.Connection object at 0x7f2236bdc438>
worked for me. I'm not sure why I had to get rid of the unix socket connection string as I did enable the Cloud SQL API for my project.

uwsgi django at boot fail to start without postgresql

Debian server, with uWSGI started at boot by a crontab @reboot entry, which returns this in uwsgi.log:
File "/usr/local/lib/python2.7/dist-packages/django/db/backends/postgresql/base.py", line 175, in get_new_connection
    connection = Database.connect(**conn_params)
File "/usr/lib/python2.7/dist-packages/psycopg2/__init__.py", line 164, in connect
    conn = _connect(dsn, connection_factory=connection_factory, async=async)
django.db.utils.OperationalError: could not connect to server: Connection refused
    Is the server running on host "127.0.0.1" and accepting TCP/IP connections on port 5432?
Thu Mar 24 05:19:02 2016 - unable to load app 0 (mountpoint='') (callable not found or import error)
Thu Mar 24 05:19:02 2016 - *** no app loaded. going in full dynamic mode ***
Thu Mar 24 05:19:02 2016 - *** uWSGI is running in multiple interpreter mode ***
If I wait until PostgreSQL has also started and then restart uWSGI, everything works.
Is there a way to tell uWSGI to wait for PostgreSQL?
Use systemd to start uWSGI instead of cron.
Create file /etc/systemd/system/uwsgi.service with content:
[Unit]
Description=uWSGI
After=syslog.target
After=postgresql#9.4-main.service
Wants=postgresql#9.4-main.service
[Service]
ExecStart=/usr/local/bin/uwsgi --ini /etc/uwsgi/uwsgi.ini
Restart=always
KillSignal=SIGTERM
Type=notify
StandardError=syslog
NotifyAccess=all
[Install]
WantedBy=multi-user.target
Of course, change the start command to whatever you need.
If you want to start more than one uWSGI server (for more than one user, perhaps), consider using the uWSGI Emperor. You can even create a shared directory in which everyone can create and manage their own files (by setting the sticky bit on the directory) and run the emperor in tyrant mode, so every vassal starts only under the user account that owns its file, if you want to give anyone the ability to create their own uWSGI instances.