I have set a value in the Redis server externally using a Python script.
r = redis.StrictRedis(host='localhost', port=6379, db=1)
r.set('foo', 'bar')
Then I tried to get the value from a web request using the Django cache inside views.py.
from django.core.cache import cache
val = cache.get("foo")
It is returning None. But when I try to get it from
from django_redis import get_redis_connection
con = get_redis_connection("default")
val = con.get("foo")
It is returning the correct value 'bar'. How do the cache and the direct connection work?
Libraries usually add internal prefixes to the keys they store in Redis, so that they are not mistaken for user-defined keys.
For example, django-redis prepends ':1:' (the default empty KEY_PREFIX plus the cache version) to every key you save through it.
So, for example, when you do cache.set('foo', 'bar') through Django, the key is actually stored in Redis as ':1:foo'. Since the raw client knows nothing about that prefix, a plain get on 'foo' misses it; you either have to go through the cache's own API or use the prefixed key:
cache.set('foo', 'bar')
r.get('foo')      # None
r.get(':1:foo')   # bar
So in the end it comes down to the library you use; go read its code to see exactly how it saves keys. redis-cli can be a valuable friend here: set a key with cache.set('foo', 'bar'), then run the 'keys *' command in redis-cli to see which key was actually written for foo.
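To see this end to end, here is a minimal sketch (assuming the default django-redis setup from the question, i.e. the "default" alias, an empty KEY_PREFIX and cache version 1):
from django.core.cache import cache
from django_redis import get_redis_connection

cache.set('foo', 'bar', timeout=60)   # stored in Redis under the key ':1:foo'
con = get_redis_connection("default")

print(con.keys('*foo*'))   # e.g. [b':1:foo'] -- the prefixed key django-redis wrote
print(cache.get('foo'))    # 'bar'  (the cache adds the prefix for you)
print(con.get('foo'))      # None, unless an unprefixed 'foo' was set directly, as in the question
Note that, depending on the serializer django-redis is configured with, the raw value read back over the direct connection may come back pickled rather than as a plain string.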
I'm debugging a Flask app and want to see which values were stored in my simple cache. Is there a way to retrieve all keys? (The same way you might with a dictionary...)
cache = Cache()
cache.init_app(app, config={"CACHE_TYPE": "simple"})
cache.set("item-1", "red")
cache.set("item-2", "blue")
# I would like to do the following:
# cache.keys()
Based on the source code for Flask-Caching (don't use Flask-Cache, because it's very dated)...
There doesn't appear to be a built-in method to get all the values without providing the keys, but for debugging you could do something like:
>>> for k in cache.cache._cache:
... print (k, cache.get(k))
...
item-1 red
item-2 blue
This appears to return a value of None for expired items:
item-1 None
cache.cache._cache is the dictionary with pickled values.
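If you want to peek at the raw contents, here is a rough sketch (assuming the 'simple' backend from cachelib/werkzeug, which stores (expiry_timestamp, pickled_value) tuples; this is an internal detail and may differ between versions):
import pickle

for key, (expires, raw) in cache.cache._cache.items():
    print(key, expires, pickle.loads(raw))  # expiry timestamp and the unpickled value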
However, you should also be aware that the 'simple' cache isn't really thread safe, as it only uses a dictionary for storage. You should switch to a different backend such as Redis for larger apps.
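A minimal sketch of switching to the Redis backend (assuming a local Redis on the default port; depending on your Flask-Caching version the backend name is either "redis" or "RedisCache", and the key prefix below is an arbitrary example):
from flask import Flask
from flask_caching import Cache

app = Flask(__name__)
cache = Cache(app, config={
    "CACHE_TYPE": "RedisCache",          # "redis" on older Flask-Caching versions
    "CACHE_REDIS_HOST": "localhost",
    "CACHE_REDIS_PORT": 6379,
    "CACHE_KEY_PREFIX": "flask_cache_",  # prepended to every key stored in Redis
})

cache.set("item-1", "red")
print(cache.get("item-1"))  # 'red'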
When using Redis as the backend for flask_caching, there is no method exposed to query all the keys.
But we can query the redis instance ourselves.
From the source:
k_prefix = cache.cache.key_prefix                       # prefix flask_caching adds to every stored key
keys = cache.cache._write_client.keys(k_prefix + '*')   # raw Redis KEYS query against the backend client
keys = [k.decode('utf8') for k in keys]                 # Redis returns bytes
keys = [k.replace(k_prefix, '') for k in keys]          # strip the prefix back off
print(keys)
values = cache.get_many(*keys)                          # fetch the values through the normal cache API
print(values)
I have a resolver and I gave it a key to save into django-redis. I can see the key and value inside Redis, but somehow the loading time is still the same.
If I make a regular REST request it works fine, but somehow GraphQL doesn't seem to be working properly.
from django.core.cache import cache

def resolve_search(self, info, **kwargs):
    redis_key = 'graphl_search'
    if cache.keys(redis_key):  # check if the key exists; if so, return the cached value
        print('using redis')   # this runs on later calls, while the key exists (3600 seconds)
        return cache.get(redis_key)
    # set redis
    print('set redis')  # this runs the first time, since the key does not exist yet
    my_model = Model.objects.filter(slug=kwargs.get('slug')).first()
    cache.set(redis_key, my_model, timeout=3600)
    return my_model
It works properly and doesn't go into the rest of the block if the key exists, but when I checked, the response time was the same.
1st time (5xx ms)
2nd time (5xx ms)
Am I doing something wrong, or is that not how I should use Redis with GraphQL?
Thanks in advance for any help or suggestions
I recently migrated from database-backed sessions to sessions stored via memcached using pylibmc.
Here are my CACHES, SESSION_CACHE_ALIAS & SESSION_ENGINE settings in settings.py:
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.PyLibMCCache',
        'LOCATION': ['127.0.0.1:11211'],
    }
}
SESSION_CACHE_ALIAS = 'default'
SESSION_ENGINE = "django.contrib.sessions.backends.cache"
Everything is working fine behind the scenes and I can see that it is using the new caching system. Running the get_stats() method from pylibmc shows me the number of current items in the cache and I can see that it has gone up by 1.
The issue is I'm unable to grab the session manually using pylibmc.
Upon inspecting the request session data in views.py:
def my_view(request):
    if request.user.is_authenticated():
        print request.session.session_key
        # the above prints something like this: "1ay2kcv7axb3nu5fwnwoyf85wkwsttz9"
        print request.session.cache_key
        # the above prints something like this: "django.contrib.sessions.cache1ay2kcv7axb3nu5fwnwoyf85wkwsttz9"
        return HttpResponse(status=200)
    else:
        return HttpResponse(status=401)
I noticed that when printing cache_key, it prints with the default KEY_PREFIX whereas for session_key it didn't. Take a look at the comments in the code to see what I mean.
So I figured, "Ok great, one of these key names should work. Let me try grabbing the session data manually just for educational purposes":
import pylibmc
mc = pylibmc.Client(['127.0.0.1:11211'])
# Let's try key "1ay2kcv7axb3nu5fwnwoyf85wkwsttz9"
mc.get("1ay2kcv7axb3nu5fwnwoyf85wkwsttz9")
Hmm, nothing comes back; no key exists by that name. OK, no worries, let's try the cache_key then; that should definitely work, right?
mc.get("django.contrib.sessions.cache1ay2kcv7axb3nu5fwnwoyf85wkwsttz9")
What? How am I still getting nothing back? As a test, I decide to set and get a random key/value to see if it works, and it does. I run get_stats() again just to make sure that the key does exist. I also test the web app to confirm my session is indeed working, and it is. So this leads me to conclude that there is a different naming scheme that I'm unaware of.
If so, what is the correct naming scheme?
Yes, the cache key used internally by Django is, in general, different to the key sent to the cache backend (in this case pylibmc / memcached). Let us call these two keys the django cache key and the final cache key respectively.
The django cache key given by request.session.cache_key is for use with Django's low-level cache API, e.g.:
>>> from django.core.cache import cache
>>> cache.get(request.session.cache_key)
{'_auth_user_hash': '1ay2kcv7axb3nu5fwnwoyf85wkwsttz9', '_auth_user_id': u'1', '_auth_user_backend': u'django.contrib.auth.backends.ModelBackend'}
The final cache key, on the other hand, is a composition of the key prefix, the django cache key, and the cache version number. The make_key function (from the Django docs) below demonstrates how these three values are composed to generate this key:
def make_key(key, key_prefix, version):
    return ':'.join([key_prefix, str(version), key])
By default, key_prefix is the empty string and version is 1.
Finally, by inspecting make_key we find that the correct final cache key to pass to mc.get is
:1:django.contrib.sessions.cache1ay2kcv7axb3nu5fwnwoyf85wkwsttz9
which has the form <KEY_PREFIX>:<VERSION>:<KEY>.
Note: the final cache key can be changed by defining KEY_FUNCTION in the cache settings.
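Putting it together, a minimal sketch (using the cache_key from the question and assuming the default empty KEY_PREFIX and VERSION=1; make_key() here is the cache backend's internal key-composition helper):
import pylibmc
from django.core.cache import cache

django_cache_key = "django.contrib.sessions.cache1ay2kcv7axb3nu5fwnwoyf85wkwsttz9"

# Let Django compose the final cache key: ':1:django.contrib.sessions.cache1ay2...'
final_key = cache.make_key(django_cache_key)

mc = pylibmc.Client(['127.0.0.1:11211'])
print(mc.get(final_key))  # the session dict, e.g. {'_auth_user_id': u'1', ...}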
I am using:
A Python application in Bluemix
Bluemix cloudant v2.0.0b2 database linked to the Python app
According to https://pypi.python.org/pypi/cloudant/2.0.0b2, everything broke from 0.5 to 2.0, and they are still working on the documentation as everything is beta. On top of this, I am also new to Python and databases. Documentation can be found here:
http://python-cloudant.readthedocs.io/en/latest/getting_started.html
What I am trying to do is check if a document already exists.
Things that I have tried:
from cloudant.account import Cloudant
import time
import json
# Connect to the database
client = Cloudant(*hidden*)
client.connect()
# The database we work in
db = client['myDatabase']
# The document we work on
doc = db['myDocument']
print doc.exists()
But the code fails before retrieving the document. I checked the source code, and it looks like that is how it is supposed to behave:
def __getitem__(self, key):
    if key in list(self.keys()):
        return super(CouchDatabase, self).__getitem__(key)
    if key.startswith('_design/'):
        doc = DesignDocument(self, key)
    else:
        doc = Document(self, key)
    if doc.exists():
        doc.fetch()
        super(CouchDatabase, self).__setitem__(key, doc)
        return doc
    else:
        raise KeyError(key)
Source: https://pypi.python.org/pypi/cloudant/2.0.0b2
Is there a way I can check if the document exists before I retrieve it? Or should I retrieve it and catch the error? Or is there a different approach?
The behavior you are describing is the desired behavior for the python-cloudant library's database object. If you intend to use the database object to retrieve your documents and populate your local database cache, you should expect a KeyError in the event of a non-existent document and handle it accordingly. However, if you are interested in checking whether a document exists before bringing it into your local database cache, then changing your code to something like:
from cloudant.account import Cloudant
from cloudant.document import Document
# Connect to the database
client = Cloudant(*hidden*)
client.connect()
# The database we work in
db = client['myDatabase']
# The document we work on
if Document(db, 'myDocument').exists():
    doc = db['myDocument']
would do the trick.
Similarly you could just do:
from cloudant.account import Cloudant
from cloudant.document import Document
# Connect to the database
client = Cloudant(*hidden*)
client.connect()
# The database we work in
db = client['myDatabase']
# The document we work on
doc = Document(db, 'myDocument')
if doc.exists():
    doc.fetch()
But this would not populate your local database cache, the db dictionary.
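For completeness, a minimal sketch of the KeyError approach mentioned at the start of this answer (assuming db has been obtained exactly as in the snippets above):
try:
    doc = db['myDocument']  # fetches the document and caches it in db if it exists
except KeyError:
    doc = None              # the document does not exist; handle accordingly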
I have a series of caches which follow this pattern:
key_x_y = value
Like:
'key_1_3' = 'foo'
'key_2_5' = 'bar'
'key_1_7' = 'baz'
Now I'm wondering how I can iterate over all keys matching a pattern like key_1_* to get foo and baz using the native Django cache.get()?
(I know that there are ways, particularly for Redis, that allow a more extensive API such as iteration, but I'd like to stick to the vanilla Django cache, if possible.)
This is not possible using Django's standard cache wrapper, because searching keys by pattern is a backend-dependent operation and is not supported by all of the cache backends Django can use (e.g. memcached does not support it, but Redis does). So you will have to use a custom cache wrapper with a cache backend that supports this operation.
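For example, here is a minimal sketch of a settings.py entry that swaps in django-redis as the cache backend (the Redis URL is an example value):
CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": "redis://127.0.0.1:6379/1",
        "OPTIONS": {
            "CLIENT_CLASS": "django_redis.client.DefaultClient",
        },
    }
}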
Edit:
If you are already using django-redis then you can do
from django.core.cache import cache
cache.keys("foo_*")
as explained here.
This will return a list of keys matching the pattern; then you can use cache.get_many() to get the values for these keys.
cache.get_many(cache.keys("key_1_*"))
If the cache has the following entries:
cache = {'key_1_3': 'foo', 'key_2_5': 'bar', 'key_1_7': 'baz'}
You can get all the entries whose keys match key_1_*:
x = {k: v for k, v in cache.items() if k.startswith('key_1')}
Based on the documentation from django-redis
You can list all the keys with a pattern:
>>> from django.core.cache import cache
>>> cache.keys("key_1_*")
# ["key_1_3", "key_1_7"]
Once you have the keys, you can get the values like this:
>>> [cache.get(k) for k in cache.keys("key_1_*")]
# ['foo', 'baz']
You can also use cache.iter_keys(pattern) for a more efficient, lazy implementation.
Or, as suggested by #Muhammad Tahir, you can use cache.get_many(cache.keys("key_1_*")) to get all the values in one go.
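A minimal sketch of the lazy variant (django-redis only; iter_keys() scans the keys server-side, so they are yielded without building the full list up front):
from django.core.cache import cache

for key in cache.iter_keys("key_1_*"):
    print(key, cache.get(key))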
I saw several answers above mentioning django-redis.
Based on https://pypi.org/project/django-redis/, you can actually use the delete_pattern() method:
from django.core.cache import cache
cache.delete_pattern('key_1_*')