graphene django with redis does not work? - django

I have a resolver and I gave it a key to save into django-redis. I can see the key and value inside Redis, but somehow the loading time is still the same.
If I do the same thing with a regular REST view it works fine, but with GraphQL it doesn't seem to be working properly:
from django.core.cache import cache

def resolve_search(self, info, **kwargs):
    redis_key = 'graphql_search'
    if cache.keys(redis_key):  # check if the key exists; if it does, return the cached value
        print('using redis')  # this runs on later requests, while the key exists (3600 seconds)
        return cache.get(redis_key)
    # set redis
    print('set redis')  # this runs the first time, since the key does not exist yet
    my_model = Model.objects.filter(slug=kwargs.get('slug')).first()
    cache.set(redis_key, my_model, timeout=3600)
    return my_model
The caching itself works properly and doesn't fall through to the rest of the block when the key exists, but when I checked the response time it was the same:
1st time (5xx ms)
2nd time (5xx ms)
Am I doing something wrong, or is that not how Redis should be used with GraphQL?
Thanks in advance for any help or suggestions
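(For what it's worth, the same check-then-set pattern can be written with a single cache.get_or_set call, which avoids the separate keys() round trip. A minimal sketch using the Model and slug from above, not necessarily a fix for the timing itself:)

from django.core.cache import cache

def resolve_search(self, info, **kwargs):
    slug = kwargs.get('slug')
    # get_or_set only evaluates the callable (and hits the database) when the key is missing
    return cache.get_or_set(
        'graphql_search_%s' % slug,
        lambda: Model.objects.filter(slug=slug).first(),
        timeout=3600,
    )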

Related

flask how to keep database queries references up to date

I am creating a Flask app with two panels, one for the admin and the other for users. In the app's scheme I have a utilities file where I keep most of the redundant variables besides other functions (by redundant I mean I use them in many different parts of the application).
utilities.py
# ...
opening_hour = db_session.query(Table.column).one()[0] # 10:00 AM
# ...
The Table.column value, or let's say the opening_hour variable above, is entered into the database by the admin through his/her web panel. This value blocks users from accessing certain functionality of the application before the specified hour.
The problem is:
If the admin changes that value through his/her web panel, say to 11:00 AM, the change is not reflected directly in the users' panel, even though it was written to the database.
If I want the new opening_hour value to take effect, I have to manually shut the app down and restart it, and sometimes even that doesn't work.
I have tried adding gc.collect(), which did nothing. There must be a way around this other than shutting down and restarting the app manually; first, I doubt the admin will be able to do that, and second, even if he/she can, it would be really frustrating.
If someone can relate to this, please explain why this is occurring and how to get around it. Thanks in advance :)
You are trying to add advanced logic to a simple variable: you want to query the DB only once and then periodically force the variable to update by re-loading the module. That's not how modules and the import mechanism are supposed to be used.
If you want to access a possibly changing value from the database, you have to read it over and over again.
The solution is to define a function opening_hours, instead of a variable, that executes the DB query every time you check the value:
def opening_hours():
    return (
        db_session.query(Table.column).one()[0],  # 10:00 AM
        db_session.query(Table.column).one()[1]   # 5:00 PM
    )
Now you may not want to have to query the Database every time you check the value, but maybe cache it for a few minutes. The easiest would be to use cachetools for that:
import cachetools

cache = cachetools.TTLCache(maxsize=10, ttl=60)  # cache for 60 seconds

@cachetools.cached(cache)
def opening_hours():
    return (
        db_session.query(Table.column).one()[0],  # 10:00 AM
        db_session.query(Table.column).one()[1]   # 5:00 PM
    )
Also, since you are using Flask, you can create a route decorator that controls access to your views depending on the time of day:
from datetime import datetime
from functools import wraps
from flask import render_template

def only_within_office_hours(f):
    @wraps(f)
    def decorated_function(*args, **kwargs):
        start_time, stop_time = opening_hours()
        if not (start_time <= datetime.now().time() <= stop_time):
            return render_template('office_hours_error.html')
        return f(*args, **kwargs)
    return decorated_function
that you can use like
@app.route('/secret_page')
@login_required
@only_within_office_hours
def secret_page():
    pass
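One caveat worth noting (an assumption about the schema, since the question only shows the value as "10:00 AM"): if the column stores the hour as text rather than a TIME type, it has to be parsed into a datetime.time before it can be compared with datetime.now().time(). A hypothetical helper for that could look like:

from datetime import datetime

def parse_hour(value):
    # e.g. '10:00 AM' -> datetime.time(10, 0); adjust the format string to match your data
    return datetime.strptime(value, '%I:%M %p').time()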

Django Sessions via Memcache: Cannot find session key manually

I recently migrated from database-backed sessions to sessions stored in memcached via pylibmc.
Here are my CACHES, SESSION_CACHE_ALIAS and SESSION_ENGINE in settings.py:
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.PyLibMCCache',
        'LOCATION': ['127.0.0.1:11211'],
    }
}
SESSION_CACHE_ALIAS = 'default'
SESSION_ENGINE = "django.contrib.sessions.backends.cache"
Everything is working fine behind the scenes and I can see that it is using the new caching system. Running the get_stats() method from pylibmc shows me the number of current items in the cache and I can see that it has gone up by 1.
The issue is I'm unable to grab the session manually using pylibmc.
Upon inspecting the request session data in views.py:
def my_view(request):
    if request.user.is_authenticated():
        print request.session.session_key
        # the above prints something like this: "1ay2kcv7axb3nu5fwnwoyf85wkwsttz9"
        print request.session.cache_key
        # the above prints something like this: "django.contrib.sessions.cache1ay2kcv7axb3nu5fwnwoyf85wkwsttz9"
        return HttpResponse(status=200)
    else:
        return HttpResponse(status=401)
I noticed that when printing cache_key, it prints with the default KEY_PREFIX, whereas session_key doesn't. Take a look at the comments in the code to see what I mean.
So I figured, "Ok great, one of these key names should work. Let me try grabbing the session data manually just for educational purposes":
import pylibmc
mc = pylibmc.Client(['127.0.0.1:11211'])
# Let's try key "1ay2kcv7axb3nu5fwnwoyf85wkwsttz9"
mc.get("1ay2kcv7axb3nu5fwnwoyf85wkwsttz9")
Hmm nothing happens, no key exists by that name. Ok no worries, let's try the cache_key then, that should definitely work right?
mc.get("django.contrib.sessions.cache1ay2kcv7axb3nu5fwnwoyf85wkwsttz9")
What? How am I still getting nothing back? As a test I decided to set and get a random key value to see if it works, and it does. I ran get_stats() again just to make sure that the key does exist. I also tested the web app to see if my session is indeed working, and it is. So this leads me to conclude that there is a different naming scheme that I'm unaware of.
If so, what is the correct naming scheme?
Yes, the cache key used internally by Django is, in general, different to the key sent to the cache backend (in this case pylibmc / memcached). Let us call these two keys the django cache key and the final cache key respectively.
The django cache key given by request.session.cache_key is for use with Django's low-level cache API, e.g.:
>>> from django.core.cache import cache
>>> cache.get(request.session.cache_key)
{'_auth_user_hash': '1ay2kcv7axb3nu5fwnwoyf85wkwsttz9', '_auth_user_id': u'1', '_auth_user_backend': u'django.contrib.auth.backends.ModelBackend'}
The final cache key on the other hand, is a composition of the key prefix, the django cache key, and the cache version number. The make_key function (from Django docs) below demonstrates how these three values are composed to generate this key:
def make_key(key, key_prefix, version):
    return ':'.join([key_prefix, str(version), key])
By default, key_prefix is the empty string and version is 1.
Finally, by inspecting make_key we find that the correct final cache key to pass to mc.get is
:1:django.contrib.sessions.cache1ay2kcv7axb3nu5fwnwoyf85wkwsttz9
which has the form <KEY_PREFIX>:<VERSION>:<KEY>.
Note: the final cache key can be changed by defining KEY_FUNCTION in the cache settings.
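Putting that together, the manual lookup from the question should work once the prefix and version are included (assuming the default empty KEY_PREFIX and version 1):

import pylibmc

mc = pylibmc.Client(['127.0.0.1:11211'])
# '<KEY_PREFIX>:<VERSION>:<django cache key>' with the defaults filled in
mc.get(":1:django.contrib.sessions.cache1ay2kcv7axb3nu5fwnwoyf85wkwsttz9")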

Multi-DB Transactions

Django Version 1.10.5 with Postgres 9.6.1
For the last year I've been working in a multi-schema, single-database environment. However, things are beginning to grow to the point where I've decided to split the single database into 3 databases.
I've got things working with a master/slave router for all 3 databases.
I am not using the 'default' database key. Instead I have 'db1', 'db2', and 'db3'
The part I am confused about is with transactions in this multi-database environment.
In this example it fails as expected, caused of course by not using @transaction.atomic(using='db1'), which is clear to me:
@transaction.atomic()
def edit(self, context):
    """Edit

    :param dict context: Context
    :return: None
    """
    # Check if employee exists
    try:
        result = Passport.objects.get(pk=self.user.employee_id)
    except Passport.DoesNotExist:
        return False
    result.name = context.get('name')
    result.save()
However I have this strange example, simply because I'm trying to understand... I would have expected this to fail but it does not:
@transaction.atomic(using='db1')
def edit(self, context):
    """Edit

    :param dict context: Context
    :return: None
    """
    # Check if employee exists
    try:
        result = Passport.objects.get(pk=self.user.employee_id)
    except Passport.DoesNotExist:
        return False
    result.name = context.get('name')
    with transaction.atomic(using='db2'):
        result.save()
The model Passport does not exist in DB2 models at all.
My router is set up so that all writes go to their respective databases.
So what is the purpose of setting using='db1' in the atomic transaction? I've looked at the source and I see it defaults to 'default' when using isn't given.
In the above example I even opened another transaction inside the initial one, this time with using='db2', where the model doesn't even exist. I figured that would have failed, but it didn't, and the data was written to the proper database.
I bring this up because there will be situations where I need to interact with all 3 databases, and if a single problem occurs while writing to them, all 3 need to be rolled back; on success of everything, of course, all 3 should be committed.
Perhaps someone can help break this down for me so I can understand?
You're interpreting transaction.atomic(using='X') to mean: run the following database commands on X, inside a transaction.
In fact, it just means: open a transaction on database X, and then either commit it or roll it back at the end of the block.
Or, as the documentation puts it:
Under the hood, Django’s transaction management code:
opens a transaction when entering the outermost atomic block;
commits or rolls back the transaction when exiting the outermost block.
The question of which database to use for a given command is determined by your router, not the using clause. So your transaction.atomic(using='db2') block is pointless (it will simply open a transaction on db2 and then close it), but not an error.
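As for coordinating writes across all 3 databases: there is no true distributed (two-phase) commit here, but a common pattern is to nest atomic blocks, one per database, so that an exception escaping the outermost block rolls all of them back. A rough sketch only (save_to_db1 and friends are hypothetical stand-ins for whatever writes your router directs to each database):

from django.db import transaction

def edit_everywhere(context):
    # each atomic() opens a transaction on one database; if anything inside raises,
    # the exception propagates outwards and every still-open transaction is rolled back
    with transaction.atomic(using='db1'):
        with transaction.atomic(using='db2'):
            with transaction.atomic(using='db3'):
                save_to_db1(context)  # hypothetical helper writing to db1
                save_to_db2(context)  # hypothetical helper writing to db2
                save_to_db3(context)  # hypothetical helper writing to db3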

Django Redis cache values

I have set a value on the Redis server externally, using a Python script:
import redis

r = redis.StrictRedis(host='localhost', port=6379, db=1)
r.set('foo', 'bar')
Then I tried to get the value from a web request using the Django cache inside views.py:
from django.core.cache import cache
val = cache.get("foo")
It returns None. But when I try to get it via
from django_redis import get_redis_connection
con = get_redis_connection("default")
val = con.get("foo")
it returns the correct value 'bar'. How do the cache and the direct connection work differently?
Libraries usually add internal prefixes when storing keys in Redis, so that they are not mistaken for user-defined keys.
For example, django-redis (the backend used here) prepends ":1:" to every key you save through it.
So when you do cache.set('foo', 'bar') through Django, the key is actually stored as ":1:foo". Since you don't know the prefix prepended to your key, you can't fetch it with a plain get on the raw client; you have to use the library's own API, or include the prefix yourself:
cache.set('foo', 'bar')  # stored as ':1:foo'
r.get('foo')             # None
r.get(':1:foo')          # the stored value (possibly serialized by the backend)
So in the end it comes down to the library you use; go read its code to see exactly how it saves keys. redis-cli can be a valuable friend here: set a key with cache.set('foo', 'bar'), then run 'keys *' in redis-cli to see what key was actually set for foo.
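For instance, a quick way to see the actual stored key from Python, reusing the get_redis_connection helper already shown in the question (the exact output depends on your prefix and version settings):

from django.core.cache import cache
from django_redis import get_redis_connection

cache.set('foo', 'bar')
con = get_redis_connection("default")
print(con.keys('*foo*'))  # e.g. [b':1:foo'] with the default prefix and version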

Django: Getting Django-cron Running

I am trying to get Django-cron running, but it seems to only run after hitting the site once. I am using Virtualenv.
Any ideas why it only runs once?
On my PATH, I added the location of django_cron: '/Users/emilepetrone/Workspace/zapgeo/zapgeo/django_cron'
My cron.py file within my Django app:
from django_cron import cronScheduler, Job
from products.views import index

class GetProducts(Job):
    run_every = 60

    def job(self):
        index()

cronScheduler.register(GetProducts)

class GetLocation(Job):
    run_every = 60

    def job(self):
        index()

cronScheduler.register(GetLocation)
The first possible reason
There is a variable in django_cron/base.py:
# how often to check if jobs are ready to be run (in seconds)
# in reality if you have a multithreaded server, it may get checked
# more often that this number suggests, so keep an eye on it...
# default value: 300 seconds == 5 min
polling_frequency = getattr(settings, "CRON_POLLING_FREQUENCY", 300)
So the minimum interval at which django-cron checks whether it is time to start your task is polling_frequency. You can change it by adding a setting to your project's settings.py:
CRON_POLLING_FREQUENCY = 100 # use your custom value in seconds here
To start a job, hit your server at least once after starting the Django web server.
The second possible reason
Your job has an error and is not queued (the queued flag is set to 'f' if your job raises an exception). In this case the 'queued' field of the 'django_cron_job' table holds the string value 'f'. You can check it with the query:
select queued from django_cron_job;
If you change the code of your job, the field may stay at 'f'. So, once you have fixed the error in your job, you should manually set the queued field back to 't'. Alternatively, the executing flag in the django_cron_cron table may be 't', which means your app server was stopped while your task was in progress; in that case you should manually set it back to 'f'.