Does Django use a cache by default?

Is there a predefined default caching system for Django projects if nothing is configured?
If yes, how does it work?

If you configure your view or method to use the cache, it will use the local-memory cache. Django keeps a separate in-memory, thread-safe cache for each process. It is quite a nice temporary solution.

From the Django cache documentation:
Local-memory caching
This is the default cache if another is not specified in your settings
file. If you want the speed advantages of in-memory caching but don’t
have the capability of running Memcached, consider the local-memory
cache backend. This cache is per-process (see below) and thread-safe.

Related

What caching backends can be implemented in django

Which of the caching strategies can be implemented in Django?
What are the pros and cons of each cache backend (in terms of ease of use and ease of development)?
Which backend should be preferred for production, etc.?
There are several backends that Django supports.
Listing a few here with comments around the positives and negatives of each.
Memcached: The gold standard for caching. An in-memory service that can return keys at a very fast rate. Not a good choice if your keys are very large in size
Redis: A good alternative to Memcached when you want to cache very large keys (for example, large chunks of rendered JSON for an API).
DynamoDB: Another good alternative to Memcached when you want to cache very large keys. It also scales very well with little IT overhead.
Localmem: Only use for local testing; don’t go into production with this cache type
Database: It’s rare that you’ll find a use case where database caching makes sense. It may be useful for local testing, but otherwise, avoid it.
File system: Can be a trap. Although reading and writing files can be faster than making SQL queries, it has some pitfalls. Each cache is local to the application server (not shared), and if you have a lot of cache keys, you can theoretically hit the filesystem's limit on the number of files allowed.
Dummy: A great backend to use for local testing when you want your data changes to be made immediately without caching. Be warned: permanently using dummy caching locally can hide bugs from you until they hit an environment where caching is enabled.
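Whichever backend you choose, you select it through the `CACHES` setting. A sketch of a hypothetical `settings.py` fragment (the backend dotted paths are the ones shipped with recent Django versions; `PyMemcacheCache` requires Django 3.2+ plus the `pymemcache` package, the built-in Redis backend requires Django 4.0+ plus `redis-py`, and the addresses are placeholders):

```python
# Hypothetical settings.py fragment. The "default" alias is what the
# cache framework uses unless told otherwise; extra aliases are optional.
CACHES = {
    "default": {
        # Memcached via pymemcache (Django 3.2+)
        "BACKEND": "django.core.cache.backends.memcached.PyMemcacheCache",
        "LOCATION": "127.0.0.1:11211",
    },
    "redis": {
        # Built-in Redis backend (Django 4.0+)
        "BACKEND": "django.core.cache.backends.redis.RedisCache",
        "LOCATION": "redis://127.0.0.1:6379/1",
    },
}
```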

Synchronize Infinispan cache entries with database

I want to know whether I can use Infinispan to synchronize cached data with an Oracle database. This is my scenario: I have two main applications. One is a highly concurrent app, and the second is used as an admin module. Since the first is highly concurrent, I want to reduce database connections (load the entities into a read-write cache and use them from there without calling the database). But in the meantime I want to update the database according to the cache changes, because the admin module uses the database directly. Can that update process (cache to database) be handled at the entity level without involving the application? Please let me know whether Infinispan supports this scenario; if it does, please share ideas.
Yes, it is possible. Infinispan supports this use case.
This should be a simple configuration "problem" only. All you need is a properly configured CacheStore with passivation disabled. It will keep your cache (used by the highly concurrent application) synchronized with the database.
What exactly does that do?
When passivation is disabled, whenever an element is modified, added
or removed, then that modification is persisted in the backend store
via the cache loader. There is no direct relationship between eviction
and cache loading. If you don't use eviction, what's in the persistent
store is basically a copy of what's in memory.
"Memory" here means the cache.
If you want to know more about this and other interesting options, see: https://docs.jboss.org/author/display/ISPN/Cache+Loaders+and+Stores#CacheLoadersandStores-cachepassivation
It may also be worth considering the aforementioned eviction, whether to enable or disable it; that depends mainly on the load generated by your highly concurrent application.
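As a rough sketch, the relevant piece of an Infinispan XML configuration might look like the following. (Hedged: element and attribute names follow the Infinispan 7.x JDBC store schema and differ between Infinispan versions; the datasource JNDI name and the table/column names are placeholders, not values from the question.)

```xml
<local-cache name="entities">
    <!-- passivation="false": every put/modify/remove is written through
         to the JDBC store immediately, keeping the database in sync -->
    <persistence passivation="false">
        <string-keyed-jdbc-store xmlns="urn:infinispan:config:store:jdbc:7.0"
                                 shared="true">
            <!-- placeholder datasource pointing at the Oracle database -->
            <data-source jndi-url="java:jboss/datasources/OracleDS"/>
            <string-keyed-table prefix="ISPN">
                <id-column name="id" type="VARCHAR(255)"/>
                <data-column name="data" type="BLOB"/>
                <timestamp-column name="ts" type="BIGINT"/>
            </string-keyed-table>
        </string-keyed-jdbc-store>
    </persistence>
</local-cache>
```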
Actually, this only works if the admin module uses Infinispan in the same cluster. If you load A into memory with Infinispan and then change A to something else directly in the database with the admin module, Infinispan will not know that A has been updated.

How does caching affect memory consumption?

I have an app which has a search feature. This feature looks up the search term in a giant object (dictionary) that I cache for 24 hours. The object is about 50,000 keys and weighs roughly 10MB.
When I profile the memory usage on my hosting, I notice that after a few queries, the memory usage goes from around 50MB to over 450MB, prompting my hosting provider to kill the app.
So I'm wondering what is going on here. Specifically, how does the cache utilize the memory on each request and what can I do to fix this?
Django's FileBasedCache is known for performance issues. You can get the big picture from the following links:
A smarter filebasedcache for Django
Bug: File based cache not very efficient with large amounts of cached files
The bug was closed as wontfix with the reasoning:
I'm going to wontfix, on the grounds that the filesystem cache is intended as an easy way to test caching, not as a serious caching
strategy. The default cache size and the cull strategy implemented by
the file cache should make that obvious.
Consider using a key-value store like Memcached or Redis as a caching strategy, because they both support expiry. Also consider a dedicated search engine like ElasticSearch if more of the anticipated features will be search-related.
Tools and howtos are available:
Installing memcached for a django project
http://code.google.com/p/memcached/wiki/NewStart
http://redis.io/commands/expire
https://github.com/bartTC/django-memcache-status
http://www.elasticsearch.org/guide/reference/index-modules/cache.html

Differences between mod_cache and memcached

I want to know what the differences are between mod_cache and memcached. I'm working on a Django site, so I'm using mod_wsgi on Apache 2. My question is: should I cache behind Apache, or with django-memcached?
If you need a simple content cache, where your responses are cached, then mod_cache seems like an easier bet. It's really just meant to cache responses for you to either memory or disk.
Memcached is an entirely different beast in my opinion. It works entirely in memory, but it can work across multiple machines, and your backend layer can access results from the cache. So your backend can put things in the cache so it doesn't have to do any expensive work again. It goes beyond what a normal content cache like mod_cache gives you.

Cache a Django view with GET parameters on a shared server

I'm developing a Django application on a shared server (Dreamhost).
A view that I'm implementing takes several HTTP GET parameters to perform database lookups and return serialized data. Some of these lookups generate several hundreds of kilobytes of data that is expensive to compute. Caching this data would be ideal as it would save both DB access and computation time. I have two questions:
The Django documentation mentions that the cache middleware doesn't cache requests with GET or POST parameters. Is there any way around this?
The Dreamhost wiki indicates that either filesystem caching or database caching is best suited for Dreamhost sites. Which of these is better in terms of performance, setup, and maintainability? Are there any alternatives for shared hosting?
I'm also open to suggestions for other solutions to my problem.
Thanks in advance!
-Advait
About caching requests with GET parameters, see:
Cache a django view that has URL parameters
Filesystem caching is usually fast enough and easy to set up, and maintenance is the same as managing any directory: delete the cache by removing the files in the cache directory.