What LOCATION should I point Memcached to after deployment on pythonanywhere server? For local I am using this setting and things are working fine.
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211',
    }
}
I need to change 'LOCATION' to replace localhost. Any guidance?
You can set LOCATION to a path:
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'unix:~/memcached.sock',
    }
}
However, I don't think PythonAnywhere lets you use Memcached, since you can't use sudo apt-get in a PythonAnywhere console, and using Memcached requires installing it (sudo apt-get install memcached).
I configured ElastiCache Redis with cluster mode enabled.
I want to connect ElastiCache to my local Django project, so I set up a bastion host.
I already connected ElastiCache (non-cluster mode) to local Django and tried cache.set() and cache.get(); that works fine.
I installed 'django-redis-cache', and my 'settings.py' looks like this:
CACHES = {
    'default': {
        'BACKEND': 'redis_cache.RedisCache',
        'LOCATION': 'localhost:6379',
    }
}
But I have a problem when connecting ElastiCache (cluster mode) with Django.
I tried tunneling to the ElastiCache configuration endpoint.
When I use the same 'settings.py', the error message is:
'SELECT is not allowed in cluster mode'
So I changed 'settings.py':
CACHES = {
    'default': {
        'BACKEND': 'redis_cache.RedisCache',
        'LOCATION': 'localhost:6379',
        'OPTIONS': {
            'DB': 0
        },
    }
}
Then the error message is:
'MOVED 4205 xx.xx.xx.xxx:6379'
What do I have to do?
I can't find any examples of connecting ElastiCache (cluster mode) with Django.
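For context: MOVED is Redis Cluster's redirection reply, meaning the key's hash slot lives on another node and the client is expected to retry there. A plain redis-py / django-redis-cache connection doesn't follow those redirects, and over an SSH tunnel the redirect targets are the cluster's private node addresses anyway. A rough sketch of how a cluster-aware client handles this, assuming redis-py >= 4.1 and that the tunnel forwards the configuration endpoint to localhost:6379:
# Hypothetical check with redis-py's cluster-aware client (redis >= 4.1).
# Note: through an SSH tunnel the MOVED redirects still point at the cluster's
# private IPs, so this usually only works from a host that can reach the nodes.
from redis.cluster import RedisCluster

rc = RedisCluster(host='localhost', port=6379, decode_responses=True)
rc.set('healthcheck', 'ok')  # the cluster client follows MOVED redirects itself
print(rc.get('healthcheck'))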
redis.service is active and I can connect with redis-cli, but in Django I get an error when Celery tries to access redis:6379 (as I understand it).
CELERY_RESULT_BACKEND = os.environ.get('REDISCLOUD_URL', 'redis://localhost:6379/0')
CACHEOPS_REDIS = os.environ.get('CACHEOPS_REDIS', 'redis://localhost:6379/9')
CACHES = {
    'default': {
        'BACKEND': 'redis_lock.django_cache.RedisCache',
        'LOCATION': 'redis://127.0.0.1:6379/',
        'OPTIONS': {
            'CLIENT_CLASS': 'django_redis.client.DefaultClient',
            'DB': 3,
        }
    }
}
This started after I reinstalled Ubuntu; it worked fine before.
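One way to narrow this down (not from the original post) is to test the same URL the settings fall back to directly with redis-py, outside of Django and Celery; if this fails as well, the problem is the Redis server or the environment variables rather than the Django configuration:
# Sanity check: connect with the same URL Celery falls back to.
# REDISCLOUD_URL is the environment variable used in the settings above.
import os
import redis

url = os.environ.get('REDISCLOUD_URL', 'redis://localhost:6379/0')
r = redis.Redis.from_url(url)
print(r.ping())  # True if the server at that URL is reachable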
How can I set up collectfast for django on heroku?
This is assuming I've already successfully set up static files hosting and serving from Amazon S3.
1) To disable Heroku's automatic collectstatic, run:
heroku config:set DISABLE_COLLECTSTATIC=1
2) Add the following to settings.py so Collectfast uses a table in your database for its caching. Commit and push the change to Heroku.
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
    },
    'collectfast': {
        'BACKEND': 'django.core.cache.backends.db.DatabaseCache',
        'LOCATION': 'collectfast_cache',
        'TIMEOUT': 60,
        'OPTIONS': {
            'MAX_ENTRIES': 10000
        },
    },
}
COLLECTFAST_CACHE = 'collectfast'
3) To create the required cache table in the database, run:
heroku run python manage.py createcachetable
4) To restore Heroku's automatic collectstatic, run:
heroku config:unset DISABLE_COLLECTSTATIC
Each deploy will now correctly use Collectfast to collect modified static files to S3.
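For completeness: the steps above assume the S3 static files setup from the question is already in place. A rough sketch of the related settings, using names from the django-storages and Collectfast 2.x documentation (older Collectfast releases used different setting names, so verify against the versions you run):
# Sketch of the S3/Collectfast settings assumed to already be working.
# Names come from django-storages and Collectfast 2.x; adjust to your setup.
INSTALLED_APPS = [
    'collectfast',  # must be listed before 'django.contrib.staticfiles'
    'django.contrib.staticfiles',
    # ...
]

STATICFILES_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
COLLECTFAST_STRATEGY = 'collectfast.strategies.boto3.Boto3Strategy'
AWS_STORAGE_BUCKET_NAME = 'my-static-bucket'  # hypothetical bucket name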
I'm attempting to use django-redis over a Unix socket rather than a TCP connection:
This is the settings.py configuration:
CACHES = {
    'default': {
        'BACKEND': 'redis_cache.cache.RedisCache',
        'LOCATION': 'unix:/tmp/redis.sock:1',
        'OPTIONS': {
            'PASSWORD': '',
            'PICKLE_VERSION': -1,  # default
            'PARSER_CLASS': 'redis.connection.HiredisParser',
            'CLIENT_CLASS': 'redis_cache.client.DefaultClient',
        },
    },
}
and this is an extract of the redis config file at /etc/redis/6379.conf:
# Specify the path for the unix socket that will be used to listen for
# incoming connections. There is no default, so Redis will not listen
# on a unix socket when not specified.
#
unixsocket /tmp/redis.sock
unixsocketperm 755
Still, I receive a ConnectionInterrumped exception, which indicates an error during the connection. Any ideas what the issue with this configuration is?
P.S. My Django version is 1.5.1, django-redis is 3.3 and hiredis is 0.0.1.
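One thing worth trying (not mentioned in the question) is to talk to the socket directly with redis-py, bypassing the Django cache layer; if this also fails, the problem is the socket file or its permissions rather than the cache backend settings:
# Direct check of the Unix socket with redis-py, bypassing Django entirely.
import redis

r = redis.Redis(unix_socket_path='/tmp/redis.sock', db=1)
print(r.ping())  # True if Redis is reachable over the socket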
EDIT: Apparently I read the cache provider wrong; the answer below is the solution for django-redis-cache, not django-redis. I'll let the answer remain, though, since changing the cache provider and using this config seems to have solved the problem.
You should not need the unix: prefix, and the backend setting looks strange:
'default': {
    'BACKEND': 'redis_cache.RedisCache',
    'LOCATION': '/tmp/redis.sock',
    'OPTIONS': { ...
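As an aside, in case you stay on django-redis rather than switching to django-redis-cache: newer django-redis releases document a URL-style unix-socket LOCATION. I can't confirm the 3.3 release accepts the same form, so treat this as a sketch to verify against the README of the version you actually run:
# Hypothetical unix-socket configuration for recent django-redis versions;
# the URL form is taken from the django-redis README, verify for your version.
CACHES = {
    'default': {
        'BACKEND': 'django_redis.cache.RedisCache',
        'LOCATION': 'unix:///tmp/redis.sock?db=1',
        'OPTIONS': {
            'CLIENT_CLASS': 'django_redis.client.DefaultClient',
        },
    },
}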
I was using Whoosh as a search backend, but now I'm switching to Elasticsearch and trying to get things working.
When trying to rebuild the index I get the error:
requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8000): Max retries exceeded with url: /_bulk?op_type=create (Caused by <class 'socket.error'>: [Errno 61] Connection refused)
The following is in my settings.py:
HAYSTACK_CONNECTIONS = {
    'default': {
        'ENGINE': 'haystack.backends.elasticsearch_backend.ElasticsearchSearchEngine',
        'URL': 'http://localhost:8000/',
        'INDEX_NAME': 'haystack',
    },
}
My question is: what is URL used for, and what should I put here? I'm running things locally for development and deploying to Heroku.
The port should be 9200.
HAYSTACK_CONNECTIONS = {
    'default': {
        'ENGINE': 'haystack.backends.elasticsearch_backend.ElasticsearchSearchEngine',
        'URL': 'http://127.0.0.1:9200/',
        'INDEX_NAME': 'haystack',
    },
}
Also, make sure you are using the development version (2.0) of Haystack.
Edit:
You probably want to first make sure that Elasticsearch is running by executing the following command:
curl -XGET 'http://127.0.0.1:9200/my_index/_mapping?pretty=1'
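A simpler first check is to hit the root endpoint, which returns basic cluster and version information even if no index exists yet; a small sketch using requests, assuming the default port:
# Minimal reachability check against a local Elasticsearch on the default port.
import requests

resp = requests.get('http://127.0.0.1:9200/')
print(resp.status_code)  # 200 if Elasticsearch is up
print(resp.json())       # basic cluster/version info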