I deployed Redis with Sentinel on Kubernetes (k8s) using the Helm chart: https://github.com/bitnami/charts/tree/master/bitnami/redis
I changed only these values (https://github.com/bitnami/charts/blob/master/bitnami/redis/values.yaml):
auth:
  enabled: false
  sentinel: false
sentinel:
  enabled: true
  masterSet: mymaster
After the deployment, I got this message:
Redis™ can be accessed via port 6379 on the following DNS name from within your cluster:
redis.default.svc.cluster.local for read only operations
For read/write operations, first access the Redis™ Sentinel cluster, which is available in port 26379 using the same domain name above.
To connect to your Redis™ server:
1. Run a Redis™ pod that you can use as a client:
kubectl run --namespace default redis-client --restart='Never' --image docker.io/bitnami/redis:6.2.6-debian-10-r103 --command -- sleep infinity
Use the following command to attach to the pod:
kubectl exec --tty -i redis-client \
--namespace default -- bash
2. Connect using the Redis™ CLI:
redis-cli -h redis -p 6379 # Read only operations
redis-cli -h redis -p 26379 # Sentinel access
To connect to your database from outside the cluster execute the following commands:
kubectl port-forward --namespace default svc/redis 6379:6379 &
redis-cli -h 127.0.0.1 -p 6379
This is working nicely:
kubectl get pods
NAME           READY   STATUS    RESTARTS   AGE
redis-node-0   2/2     Running   0          2m23s
redis-node-1   2/2     Running   0          71s
redis-node-2   2/2     Running   0          43s
But regarding access, to summarize, I have two options to access Redis:
read-only access at redis.default.svc.cluster.local:6379
read-write access at redis.default.svc.cluster.local:26379, some kind of Sentinel access. From the chart docs:
Master-Replicas with Sentinel
When installing the chart with architecture=replication and sentinel.enabled=true, it will deploy a Redis™ master StatefulSet (only one master allowed) and a Redis™ replicas StatefulSet. In this case, the pods will contain an extra container with Redis™ Sentinel. This container will form a cluster of Redis™ Sentinel nodes, which will promote a new master in case the actual one fails. In addition to this, only one service is exposed:
Redis™ service: Exposes port 6379 for Redis™ read-only operations and port 26379 for accessing Redis™ Sentinel.
For read-only operations, access the service using port 6379. For write operations, it's necessary to access the Redis™ Sentinel cluster and query the current master using the command below (using redis-cli or similar):
SENTINEL get-master-addr-by-name <name of your MasterSet. e.g: mymaster>
This command will return the address of the current master, which can be accessed from inside the cluster.
In case the current master crashes, the Sentinel containers will elect a new master node.
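For reference, the same master lookup can also be done from Python with the redis driver's Sentinel helper (a minimal sketch, assuming the redis package is installed and using the master name from my values above):

from redis.sentinel import Sentinel

# Ask Sentinel (port 26379 on the chart's service) which node is currently the master.
sentinel = Sentinel([('redis.default.svc.cluster.local', 26379)], socket_timeout=1)
print(sentinel.discover_master('mymaster'))   # e.g. ('10.42.0.12', 6379) - the current master
print(sentinel.discover_slaves('mymaster'))   # addresses of the read-only replicas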
Now I want to connect my Flask caching module to it: https://flask-caching.readthedocs.io/en/latest/
As you can see, there is an option to connect to Redis Sentinel, but I have no idea how. This is the code I have:
from flask_caching import Cache
cache = Cache(app, config={
    'CACHE_TYPE': 'RedisSentinelCache',
    'CACHE_REDIS_SENTINELS': ['redis.default.svc.cluster.local'],
    'CACHE_REDIS_SENTINEL_MASTER': 'mymaster'}
)
My questions are:
What should go in the CACHE_REDIS_SENTINELS parameter? Should I somehow get the IP addresses of each node and put those there?
What should go in the CACHE_REDIS_SENTINEL_MASTER parameter? Is it "mymaster" (sentinel -> masterSet)?
Should I always connect to the read-write server (and in that case, will the other replicas be used)? Or do I need to adjust my app so that writes always use the Sentinel access on port 26379 and reads always use the read-only port 6379? Do I need to maintain two connections?
Thank you
EDIT: I was digging into the code of flask_caching and it seems this works OK (but I am not sure if replicas are used):
import time
from flask import Flask
from flask_caching import Cache
config = {
    "DEBUG": True,  # some Flask specific configs
    'CACHE_TYPE': 'RedisSentinelCache',
    'CACHE_REDIS_SENTINELS': [
        ['redis.default.svc.cluster.local', 26379]
    ],
    'CACHE_REDIS_SENTINEL_MASTER': 'mymaster'
}

app = Flask(__name__)
# tell Flask to use the above defined config
app.config.from_mapping(config)
cache = Cache(app)


@app.route("/")
@cache.cached(timeout=5)
def index():
    return "%d\n" % time.time()


app.run()
EDIT2:
Indeed, after digging a bit more into flask_caching, it uses the replicas as well:
In the file flask_caching/backends/rediscache.py, the code gets separate clients for write and read access:
self._write_client = sentinel.master_for(master)
self._read_clients = sentinel.slave_for(master)
Cheers!
EDIT3:
Example with redis driver:
from redis.sentinel import Sentinel
sentinel = Sentinel([('redis.default.svc.cluster.local', 26379)])
redis_conn = sentinel.master_for('mymaster')
redis_conn_read = sentinel.slave_for('mymaster')
redis_conn.set('test', 'Hola!')
print(redis_conn_read.get('test'))
TLDR:
import time
from flask import Flask
from flask_caching import Cache
config = {
    "DEBUG": True,  # some Flask specific configs
    'CACHE_TYPE': 'RedisSentinelCache',
    'CACHE_REDIS_SENTINELS': [
        ['redis.default.svc.cluster.local', 26379]
    ],
    'CACHE_REDIS_SENTINEL_MASTER': 'mymaster'
}

app = Flask(__name__)
# tell Flask to use the above defined config
app.config.from_mapping(config)
cache = Cache(app)


@app.route("/")
@cache.cached(timeout=5)
def index():
    return "%d\n" % time.time()


app.run()
For more details see my original question (EDIT and EDIT2).
In my requirements.txt I only changed django==2.2.17 to django==3.0 (or 3.1.4)
and the gunicorn webserver starts leaking postgres db connections.
(Every request increases the number of connections when I check the list of clients in pgbouncer.)
I use python 3.7 and redis (via django-redis).
How can I stop the leakage?
Is there any way to limit the number of connections to a server?
Update
The leaks also happen with django==2.2.17 if I set 'CONN_MAX_AGE': None, even if I go directly to postgres, avoiding pgbouncer.
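For context, the relevant DATABASES setting looks roughly like this (a sketch with placeholder names and credentials, not my exact config):

# settings.py (sketch; host, names and credentials are placeholders)
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'mydb',
        'USER': 'myuser',
        'PASSWORD': 'secret',
        'HOST': 'pgbouncer',  # or the Postgres host when bypassing pgbouncer
        'PORT': 6432,
        # None keeps connections open indefinitely, 0 closes them after each request,
        # a positive number reuses a connection for at most that many seconds.
        'CONN_MAX_AGE': None,
    }
}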
I have installed pgAdmin on a new Windows laptop, and when I try to create a new server, it says:
When I try to run my Django app in PyCharm, it gives me the same error:
could not connect to server: Connection refused (0x0000274D/10061)
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 5432?
How can I solve this?
In case someone is running pgAdmin 4 in Docker and is not able to connect to the postgres container, like me:
The solution is to first find the IP at which the postgres container is running.
Step 1: make sure the postgres container is running.
Step 2: run the command docker ps:
PS C:\docker> docker ps
It should output something like the following or similar.
Step 3: to find the IP address of the postgres container, inspect it using (part of) its container ID, like the command below:
PS C:\docker> docker inspect fc834
Note: here I have only used part of the container ID, i.e. fc834.
This should output something like the following or similar.
Step 4:
Use this IP address in the connection, together with your correct username and password.
You may need to install PostgreSQL Server first.
You can verify whether it is installed by checking for the folder below:
C:\Program Files\PostgreSQL
You can configure your newly created server to run on localhost and port 5432.
First select the “Connection” tab in the “Create-Server” window. Then, configure the connection as follows:
Enter your server’s IP address in the “Hostname/Address” field. The default is localhost.
Specify the “Port” as “5432”.
Enter the name of the database in the “Maintenance database” field.
Enter postgres as the username, and the password for the database (use the same password you used when previously configuring the server to accept remote connections).
Click “Save” to apply the configuration.
NOTE: You first have to install PostgreSQL on your machine and run it, or run it with Docker.
I had the same issue. But in my case I had installed PostgreSQL version 9 and also installed version 12 at the same time.
When I then uninstalled version 9, the port already set in the config of version 12 stayed as it was and was not freed up.
So my solution was to change the port of version 12 in the postgresql.conf file. Or, even simpler, change the port in the server creation dialog from 5432 to 5433. Now you are able to create a server again.
You should uninstall Postgres and pgAdmin from your PC, then reinstall Postgres. Note that you have the option of installing pgAdmin together with Postgres, so you don't have to download pgAdmin separately. Allow the installation to complete, then restart your PC. After that you should be able to create your server/database.
I was running postgres and pgAdmin both in Docker containers.
sudo docker ps
sudo docker inspect <postgres_container_id>
Output:
"Networks": {
"work_file_default": {
"IPAMConfig": null,
"Links": null,
"Aliases": [
"postgres",
"578a7a1050d1"
],
"NetworkID": "49dbe9d7280b55e36afc4308469c1b55e051d7eea8f1c03f08728e652cf22b5b",
"EndpointID": "c30a642c5a0f2970147c9734cadfbe1e8d7c29fcba8a83a628b7c2b3db114716",
"Gateway": "172.18.0.1",
**"IPAddress": "172.18.0.4",**
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:12:00:04",
"DriverOpts": null
}
Instead of localhost, put the IP obtained from the above command (172.18.0.4).
In my case, I had both pgAdmin and PostgreSQL running in separate Docker containers, and I was trying to connect to localhost (127.0.0.1), which was the cause of the 'unable to connect to server' error.
Note: port 5438 on my computer (the host machine) was mapped to port 5432 of the PostgreSQL container.
So practically there are two solutions (if you have these services in separate containers and you have mapped the PostgreSQL port to your host machine):
1. Find out your local IP (mine is 192.168.1.106) and put it in the Host field.
2. Put the two containers (pgAdmin and postgres) in one Docker network
and, instead of your local IP, put the postgres container's IP in the Host field.
Another tip that may help: something I recently found out is that if you are a Linux user and have ufw enabled, you should allow the port.
E.g. on my computer postgres runs on port 5438, so I ran the command below (so I could connect from the pgAdmin container to port 5438 on the host where postgres is running):
ufw allow 5438
Run the container with your data, e.g.:
docker run --name postgresdb -e POSTGRES_USER=username -e POSTGRES_PASSWORD=password -e POSTGRES_DB=mydb -p 5432:5432 --restart always -d postgres
Then, in the pgAdmin client, use the following as the Host name/address:
host.docker.internal
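If you want to sanity-check the connection outside pgAdmin, a quick psycopg2 snippet works too (just a sketch, assuming psycopg2 is installed and reusing the credentials from the docker run command above):

# Quick connectivity check using the values from the docker run command above.
import psycopg2

conn = psycopg2.connect(
    host='host.docker.internal',  # use 'localhost' if you run this directly on the host
    port=5432,
    dbname='mydb',
    user='username',
    password='password',
)
print(conn.server_version)
conn.close()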
I was trying to install PostgreSQL and pgAdmin with the installer given here: https://www.postgresql.org/download/windows/. This installer includes the PostgreSQL server and pgAdmin.
I was facing an error while starting pgAdmin: "The pgAdmin 4 server could not be contacted". I tried different solutions but they did not work.
Then I uninstalled both of them, deleted the folder C:\Users\%USERNAME%\AppData\Roaming\pgAdmin, and cleared %temp% as well.
Then I installed pgAdmin separately from this link: https://www.pgadmin.org/download/
and it works. If you need to connect it to your local server, I think you should install the PostgreSQL server first and then pgAdmin separately.
I faced the same problem, so I uninstalled pgAdmin through the Control Panel and after that deleted the folder where pgAdmin was located. Then I went to this link and installed the whole pgAdmin package from there, and now it works fine.
I was getting this error when I was running pgadmin in a docker container on my machine, which meant that localhost:5432 was not accessible.
I worked around this by using the native version of pgadmin.
If you are running PostgreSQL in a docker container, set the host name in pgAdmin to postgres not the mapped address or localhost.
Press Win+R, then search for services.msc. A window will open; in it, find postgresql-x64-13, right-click it, and click the Start option. For me this works perfectly.
Check out this Stack Overflow link:
unable to connect to server for Postgres
How I solved this problem on Ubuntu 22.04:
I didn't have a password set in Postgres; that's why the error 'unable to connect to server 127.0.0.1 port 5432' occurred.
Open the terminal in Ubuntu and enter this command:
sudo -u postgres psql
Run the statement to set a new password: ALTER USER postgres PASSWORD 'AddNewPasswordHere'; (between the quotes, enter your new password).
Example:
1) sudo -u postgres psql
2) ALTER USER postgres PASSWORD 'mynewpassword';
3) sudo service postgresql restart
4) Then you can create a server in pgAdmin
If you already tried with “127.0.0.1” and it didn’t work, then use “localhost”.
After two years, I think this will be of good help to many people.
You don't have to uninstall PostgreSQL or pgAdmin from your system.
What you need to do is input the username and password of a particular user created in PostgreSQL into the server input box.
And that's all you need.
I hope this helps.
I've been using Flower locally and it seems easy enough to set up and run, but I can't see how I would set it up in a production environment.
In particular, how can I add authentication and how would I define a url to access it?
For custom address, use the --address flag.
For auth, use the --basic_auth flag.
See below:
# celery flower --help
Usage: /usr/local/bin/celery [OPTIONS]
Options:
--address run on the given address
--auth regexp of emails to grant access
--basic_auth colon separated user-password to enable
basic auth
--broker_api inspect broker e.g.
http://guest:guest@localhost:15672/api/
--certfile path to SSL certificate file
--db flower database file (default flower.db)
--debug run in debug mode (default False)
--help show this help information
--inspect inspect workers (default True)
--inspect_timeout inspect timeout (in milliseconds) (default
1000)
--keyfile path to SSL key file
--max_tasks maximum number of tasks to keep in memory
(default 10000) (default 10000)
--persistent enable persistent mode (default False)
--port run on the given port (default 5555)
--url_prefix base url prefix
--xheaders enable support for the 'X-Real-Ip' and
'X-Scheme' headers. (default False)
You can use https://pypi.org/project/django-revproxy/
This way Flower is hidden behind Django auth, and you don't need a rewrite rule in your webserver.
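A minimal sketch of what that can look like, assuming Flower listens on localhost:5555 and was started with --url_prefix=flower (names and paths here are illustrative, not prescribed by django-revproxy):

# urls.py (sketch): proxy /flower/ through Django, only for logged-in users
from django.contrib.auth.decorators import login_required
from django.urls import re_path
from revproxy.views import ProxyView

urlpatterns = [
    re_path(
        r'^flower/(?P<path>.*)$',
        login_required(ProxyView.as_view(upstream='http://localhost:5555/flower')),
    ),
]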
Original source of this answer: Celery Flower Security in Production
I have a Django app deployed to Heroku, with a worker process running celery (+ celerycam for monitoring). I am using RedisToGo's Redis database as a broker. I noticed that Redis keeps running out of memory.
This is what my procfile looks like:
web: python app/manage.py run_gunicorn -b "0.0.0.0:$PORT" -w 3
worker: python lipo/manage.py celerycam & python app/manage.py celeryd -E -B --loglevel=INFO
Here's the output of KEYS '*':
"_kombu.binding.celeryd.pidbox"
"celeryev.643a99be-74e8-44e1-8c67-fdd9891a5326"
"celeryev.f7a1d511-448b-42ad-9e51-52baee60e977"
"_kombu.binding.celeryev"
"celeryev.d4bd2c8d-57ea-4058-8597-e48f874698ca"
`_kombu.binding.celery"
celeryev.643a99be-74e8-44e1-8c67-fdd9891a5326 is getting filled up with these messages:
{"sw_sys": "Linux", "clock": 1, "timestamp": 1325914922.206671, "hostname": "064d9ffe-94a3-4a4e-b0c2-be9a85880c74", "type": "worker-online", "sw_ident": "celeryd", "sw_ver": "2.4.5"}
Any idea what I can do to purge these messages periodically?
Is that a solution?
in addition to the _kombu.bindings.celeryev set there will be e.g. celeryev.i-am-alive. keys with a TTL set (e.g. 30 sec);
the celeryev process adds itself to the bindings and periodically (e.g. every 5 sec) updates its celeryev.i-am-alive. key to reset the TTL;
before sending an event, the worker process checks not only SMEMBERS on _kombu.bindings.celeryev but the individual celeryev.i-am-alive. keys as well, and if a key is not found (expired) then that consumer gets removed from _kombu.bindings.celeryev (and maybe the del celeryev. or expire celeryev. commands are executed).
We can't just use the KEYS command because it is O(N), where N is the total number of keys in the DB. TTLs can be tricky on Redis < 2.1 though.
expire celeryev. instead of del celeryev. could be used in order to allow a temporarily offline celeryev consumer to revive, but I don't know if it's worth it.
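A rough illustration of that heartbeat idea with redis-py (just a sketch: the 30s/5s numbers follow the proposal above, the helper functions are hypothetical and not part of kombu, and the real binding-set member encoding is simplified here):

import redis

r = redis.Redis()

def i_am_alive(consumer_id, ttl=30):
    # Called by the celeryev consumer every ~5 seconds to reset the TTL on its liveness key.
    r.set('celeryev.%s.i-am-alive' % consumer_id, 1, ex=ttl)

def prune_dead_consumers(consumer_ids):
    # Called by the worker before publishing an event: drop consumers whose
    # liveness key has expired, so their event keys stop growing.
    # Note: real kombu binding members are encoded strings, not bare consumer ids.
    for consumer_id in consumer_ids:
        if not r.exists('celeryev.%s.i-am-alive' % consumer_id):
            r.srem('_kombu.binding.celeryev', consumer_id)
            r.expire('celeryev.%s' % consumer_id, 60)  # or r.delete(...) to drop it immediately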