Django won't persist my cached data even though I set the timeout to None.
My settings.py contains this:
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
        'LOCATION': 'django_cache',  # '/var/tmp/django_cache',
        'TIMEOUT': None,
    }
}
I save data with this line of code:
cache.set('array', array, timeout=None)
I fetch data like this:
array = cache.get('array')
try:
    iterator = iter(array)
except TypeError:
    # Array not iterable: my app gets here when the cached data is lost
    pass
else:
    # Iterate over the array and extract the needed info
    ...
I don't think it's a MAX_ENTRIES issue, because I only have one array with 39 elements.
When the data is lost, the array is no longer iterable (cache.get returns None).
I also tried using a file-based cache, because I suspected that restarting the Django app might clear the in-memory cache, but I had the same issue.
I've tried file caching with the following configuration in settings.py:
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.filebased.FileBasedCache',
        'LOCATION': os.path.join(BASE_DIR, 'filecache'),
        'TIMEOUT': None,
    }
}
The "filecache" directory is created and contains a non-empty cache file, but I still lose my cached array somehow.
I am trying to apply cache in my Django project using memcache.
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211',
    }
}
I am applying per view cache as below:
from django.views.decorators.cache import cache_page
url(r'^(?P<slug>[\w-]+)/default/$', cache_page(60 * 15)(default_view), name='default_view')
I am trying to debug the code in my IDE.
I found the following function in lib/python3.6/site-packages/django/views/decorators/cache.py:
def cache_page(timeout, *, cache=None, key_prefix=None):
    return decorator_from_middleware_with_args(CacheMiddleware)(
        cache_timeout=timeout, cache_alias=cache, key_prefix=key_prefix
    )
I tried to set a breakpoint at the return line, but I found that when I refresh the URL, the code does not stop there.
As @Alasdair already mentioned in a comment, if you set a breakpoint in the cache_page body, it will be hit only once, when the URLconf is imported.
If you want to follow each request, set your breakpoint in the CacheMiddleware methods instead, in either process_request or process_response.
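The reason is general to Python decorator factories: the factory body runs once, at decoration time, while only the returned wrapper runs per call. A plain-Python sketch of that distinction (the names here are illustrative, not Django's):

```python
calls = {'factory': 0, 'wrapper': 0}

def cache_page_like(timeout):             # factory: runs once, at decoration time
    calls['factory'] += 1
    def decorator(view):
        def wrapper(*args, **kwargs):     # wrapper: runs on every request
            calls['wrapper'] += 1
            return view(*args, **kwargs)
        return wrapper
    return decorator

@cache_page_like(60 * 15)
def default_view():
    return 'response'

for _ in range(3):                        # simulate three requests
    default_view()

print(calls)  # {'factory': 1, 'wrapper': 3}
```

A breakpoint in `cache_page_like`'s body behaves exactly like the one in `cache_page`: it fires only at import time, never per request.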
I used to cache a database query in a global variable to speed up my application. Since this is strongly unadvised (and it did generate problems), I want to use some kind of Django cache instead. I tried LocMemCache and DatabaseCache, but both take... about 15 seconds to set my variable (twice as long as it takes to generate the data, which is 7 MB in size).
Is that expected? Am I doing something wrong?
(Memcached is limited to 1 MB per item, and I cannot split my data, which consists of arbitrarily big binary masks.)
Edit: FileBasedCache takes 30s to set as well.
Settings.py:
CACHES = {
    'default': {...},
    'stats': {
        'BACKEND': 'django.core.cache.backends.db.DatabaseCache',
        # or 'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
        'LOCATION': 'stats',
    },
}
Service.py:
from django.core.cache import caches

def stats_service():
    stats_cache = caches['stats']
    if stats_cache.get('key') is None:
        stats_cache.set('key', data)  # 15s with DatabaseCache, 30s with LocMemCache
    return stats_cache.get('key')
Global variable (super fast) version:
_cache = {}

def stats_service():
    if _cache.get('key') is None:
        _cache['key'] = data
    return _cache['key']
One option may be to use diskcache.DjangoCache. DiskCache extends the Django cache API to support writing and reading binary streams as-is (avoiding pickling). It works particularly well for large values (like those greater than 1 MB). DiskCache is an Apache2-licensed disk and file backed cache library, written in pure Python, and compatible with Django.
In your case, you could use ndarray.tobytes and numpy.frombuffer (the successors of the deprecated tostring/fromstring) to quickly convert to/from raw bytes. Then wrap the bytes with io.BytesIO to store/retrieve in the cache. For example:
import io

import numpy
from django.core.cache import cache

value = cache.get('cache-key', read=True)
if value:
    data = numpy.frombuffer(value.read())
    value.close()
else:
    data = ...  # Generate 7MB array.
    cache.set('cache-key', io.BytesIO(data.tobytes()), read=True)
DiskCache extends the Django cache API by permitting file-like values which are stored as binary blobs on disk. The Django cache benchmarks page has a discussion and comparison of alternative cache backends.
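For completeness, the Django side of a DiskCache setup is just a settings entry; a sketch, assuming diskcache is installed (the LOCATION path and size limit are illustrative assumptions, not requirements):

```python
CACHES = {
    'default': {
        'BACKEND': 'diskcache.DjangoCache',
        'LOCATION': '/tmp/django-diskcache',  # hypothetical path
        'TIMEOUT': None,
        # diskcache-specific options go in OPTIONS, e.g. a size limit in bytes:
        'OPTIONS': {'size_limit': 2 ** 30},   # 1 GiB
    }
}
```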
This snippet actually works fine: https://djangosnippets.org/snippets/2396/
As I understand it, the only problem with using global variables for caching is thread safety, and this no-pickle version is thread-safe.
How could this be done?
For now, only the page titles get indexed, i.e. only cms.title objects are indexed.
I have added the app as intended:
#settings.py
#TODO: It's not recommended for production use but it will return results.
HAYSTACK_CONNECTIONS = {
    'default': {
        'ENGINE': 'haystack.backends.simple_backend.SimpleEngine',
    },
}
HAYSTACK_ROUTERS = ['aldryn_search.router.LanguageRouter', ]
ALDRYN_SEARCH_REGISTER_APPHOOK = True
Is there any trigger that I'm unaware of?
Though it's an old question, I hope this may help other people.
I also added
INSTALLED_APPS = [ 'aldryn_search', 'spurl', 'haystack', 'standard_form' ]
see Mention that the aldryn search view requires standard_form #87
I'm trying to use django-cache-machine to cache queries within my application, but I want to use Redis as a backend. The docs don't really explain how to do this, yet the repository is filled with Redis references, so I'm pretty sure it's possible. I want to make sure I do it right though, so I'm wondering if anyone has any experience with configuring this and maybe more importantly, knows if there are any caveats?
In your settings set:
CACHE_MACHINE_USE_REDIS = True
REDIS_BACKEND = 'redis://127.0.0.1:6379?socket_timeout=0.1'
https://github.com/jbalogh/django-cache-machine/blob/master/caching/invalidation.py#L187
https://github.com/jbalogh/django-cache-machine/blob/master/caching/invalidation.py#L213
I have a little experience with this from my project, a report system that generates tables from about 50 million records.
The database is MySQL; I'll show my settings and models FYI.
settings:
# cache machine
CACHES = {
    'default': {
        'BACKEND': 'caching.backends.memcached.MemcachedCache',
        'LOCATION': [
            '127.0.0.1:11211',
        ],
        'PREFIX': 'report:',
    },
}
CACHE_COUNT_TIMEOUT = 60 * 24 # one day
CACHE_EMPTY_QUERYSETS = True
models:
from django.db import models
from caching.base import CachingManager, CachingMixin

class App(CachingMixin, models.Model):
    objects = CachingManager()
    name = models.CharField(max_length=64, default='')
Note that cache-machine works fine for queryset filter and count, but not for annotate or aggregate. And of course, do not forget to launch your memcached server first.
When running, you can see cache-machine entries in your django*.log telling you whether the cache was hit or missed.
Just saw this configuration in one project's settings.py:
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'KEY_PREFIX': 'projectabc:',
        'LOCATION': [
            ('10.1.1.1:11211', 1),
            ('10.1.1.2:11211', 1),
            ('10.1.1.3:11211', 1),
            ('10.1.1.4:11211', 1),
        ],
    }
}
Just curious: why have tuples inside LOCATION, and what is the "1" in each tuple for?
In python-memcached, LOCATION is ultimately passed to the following function. The explicit 1 is redundant (it is the default weight), but a helpful reminder that a weight parameter exists.
def set_servers(self, servers):
    """
    Set the pool of servers used by this client.

    @param servers: an array of servers.
    Servers can be passed in two forms:
        1. Strings of the form C{"host:port"}, which implies a default weight of 1.
        2. Tuples of the form C{("host:port", weight)}, where C{weight} is
           an integer weight value.
    """
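The weight effectively controls how often a server appears in the client's selection pool, so a higher weight receives proportionally more keys. A plain-Python sketch of that idea (this is an illustration of the concept, not python-memcached's actual implementation):

```python
def build_pool(servers):
    # Expand ("host:port", weight) entries into a flat selection pool;
    # bare "host:port" strings get the default weight of 1.
    pool = []
    for entry in servers:
        host, weight = entry if isinstance(entry, tuple) else (entry, 1)
        pool.extend([host] * weight)
    return pool

# A server with weight 3 appears three times as often in the pool:
pool = build_pool(['10.1.1.1:11211', ('10.1.1.2:11211', 3)])
print(pool)
```

With equal weights of 1, as in the configuration above, the tuples behave exactly like plain strings.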