Django: singleton per request?

We have a wrapper around a suds (SOAP) request that we use like this throughout our app:
from app.wrapper import ByDesign
bd = ByDesign()
Unfortunately, this instantiation is made at several points per request, causing suds to redownload the WSDL file, and I think we could save some time by making bd = ByDesign() return a singleton.
Since suds is not threadsafe, it'd have to be a singleton per request.
The only catch is, I'd like to make it so I don't have to change any code other than the app.wrapper.ByDesign class, so that I don't have to change any code that calls it. If there wasn't the 'singleton per request' requirement, I'd do something like this:
class ByDesignRenamed(object):
    pass

_BD_INSTANCE = None

def ByDesign():
    global _BD_INSTANCE
    if not _BD_INSTANCE:
        _BD_INSTANCE = ByDesignRenamed()
    return _BD_INSTANCE
But, this won't work in a threaded server environment. Any ideas for me?

Check out threading.local(), which is somewhere between pure evil and the only way to get things going. It should probably be something like this:
import threading

_local = threading.local()

def ByDesign():
    if 'bd' not in _local.__dict__:
        _local.bd = ByDesignRenamed()
    return _local.bd
Further reading:
Why is using thread locals in Django bad?
Thread locals in Python - negatives, with regards to scalability?
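One caveat worth keeping in mind: worker threads are reused across requests, so a thread-local instance can outlive the request that created it. Below is a minimal sketch of per-request cleanup using an old-style Django middleware; the module path and class name are my own assumptions, not part of the answer above.
# middleware.py (hypothetical) -- drops the cached instance when the request ends
from app.wrapper import _local

class ByDesignCleanupMiddleware(object):
    def process_response(self, request, response):
        _local.__dict__.pop('bd', None)  # next request on this thread gets a fresh ByDesign()
        return response

    def process_exception(self, request, exception):
        _local.__dict__.pop('bd', None)  # also clean up if the view raised
Adding something like this to MIDDLEWARE_CLASSES keeps the "singleton per request" behaviour even when threads are recycled.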

Related

Dataflow breaks using TaggedOutputs, "can't pickle WeakDictionary"

We are trying to deploy a streaming pipeline to Dataflow in which we split the data into a few different "routes" that we manipulate differently.
We did the complete development with the DirectRunner and it worked smoothly in our tests, but now that we have deployed it to Dataflow, it does not work.
The code fails when yielding in the following DoFn:
class SplitByRoute(beam.DoFn):
    OUTPUT_TAG_ROUTE_ONE = "route_one"
    OUTPUT_TAG_ROUTE_TWO = "route_two"
    OUTPUT_NOT_SUPPORTED = "not_supported"

    def __init__(self):
        beam.DoFn.__init__(self)

    def process(self, elem):
        try:
            route = self.define_route(elem["param"])  # Just tag it depending on param
        except Exception:
            route = None
        logging.info(f"Routed to {route}")
        if route == self.OUTPUT_TAG_ROUTE_ONE:
            yield TaggedOutput(self.OUTPUT_TAG_ROUTE_ONE, elem)
        elif route == self.OUTPUT_TAG_ROUTE_TWO:
            logging.info(f"Element: {elem}")
            yield TaggedOutput(self.OUTPUT_TAG_ROUTE_TWO, elem)
        else:
            yield TaggedOutput(self.OUTPUT_NOT_SUPPORTED, elem)
It does log the element and yield the output, but then fails with the following error:
AttributeError: Can't pickle local object 'WeakValueDictionary.__init__.<locals>.remove' [while running 'generatedPtransform-3196']
Another consideration is that we use TaggedOutputs earlier in the pipeline, before this DoFn, and those work on Dataflow; this one in particular fails with the error mentioned. Could it be the memory cache, or something related to it, where weakrefs are used?
As far as I know, this error happens when you have a class defined inside another one. Maybe not?
Any suggestions for how we could manage this? It's been a very frustrating error.
Thank you!!! :)
We found the error
As you might know, apache-beam uses the dill package to serialize the data between modules. This lets us pickle an instance of an object and send it through the pipeline.
The problem was that in self.define_route(elem["param"]) we used that instance of the class and modified one of its attributes. As the answer from Samuel Romero says, you can pickle a class, but I didn't really know (and probably others don't either) that if you modify the class instance it cannot be pickled again. That's strange behaviour, I know, so I opened an issue on BEAM (https://issues.apache.org/jira/browse/BEAM-10384) if you want to check it out.
I will probably dig into it (to understand the problem better) sooner or later, but if someone hits the same error, the workaround, as I mentioned, is to not modify the instance of a class being serialized.
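For illustration, a minimal sketch of what that workaround looks like in practice: the routing decision is kept a pure function of the element, so no attribute of the pickled DoFn (or of any object it holds) is mutated per element. The define_route body here is a placeholder, since the real rule is not shown above.
import logging

import apache_beam as beam
from apache_beam.pvalue import TaggedOutput

class SplitByRouteStateless(beam.DoFn):
    OUTPUT_TAG_ROUTE_ONE = "route_one"
    OUTPUT_TAG_ROUTE_TWO = "route_two"
    OUTPUT_NOT_SUPPORTED = "not_supported"

    def define_route(self, param):
        # Placeholder rule: decide the tag from the element alone,
        # without writing to self or to any captured object.
        return self.OUTPUT_TAG_ROUTE_ONE if param == "one" else self.OUTPUT_TAG_ROUTE_TWO

    def process(self, elem):
        try:
            route = self.define_route(elem["param"])
        except Exception:
            route = None
        logging.info("Routed to %s", route)
        yield TaggedOutput(route if route else self.OUTPUT_NOT_SUPPORTED, elem)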
Thanks to anyone who tried to help!
As you can read here, Python uses the pickle library for data serialization and it is subject to its limitations. Data serialization is the way processes transfer data between them since they do not share memory space.
Here I found a suggestion about using a fork of multiprocessing module that uses the dill package instead of pickle. This fork is part of the pathos framework (as is the dill package too) and is now called pathos.multiprocess and not pathos.multiprocessing as seen in the reference I mentioned previously.
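As a small illustration (mine, not part of the answer above) of why dill is suggested: plain pickle refuses objects such as lambdas, while dill can serialize them.
import pickle
import dill

f = lambda x: x + 1

try:
    pickle.dumps(f)  # plain pickle cannot serialize a lambda
except Exception as exc:
    print("pickle failed:", exc)

g = dill.loads(dill.dumps(f))  # dill round-trips it
print(g(41))  # prints 42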

Reinitialize flask extensions

I am using Flask. I have a few routes defined that are expensive, as they need to access a database and do lengthy computations. The database connectivity relies on the Flask-Mongoengine extension, which relies on PyMongo, which is not threadsafe.
Hence my thoughts are as follows:
@blueprint.route("/refresh/data", methods=['GET'])
def refresh_data():
    cache.clear()
    with Pool(4) as p:
        print(p.map(func=f, iterable=["recently", "mtd", "ytd", "sector"]))
Get a small pool and call the function f. The function f is along these lines:
def f(name):
    print(current_app.extensions)
    print(current_app.config)
    current_app.extensions["mongoengine"] = MongoEngine(app=current_app)
    print(current_app.extensions)
    get(address="reports/{path}/json".format(path=name))
    get(address="reports/{path}/html".format(path=name))
    return name
The problem here is that one cannot init_app the MongoEngine again. In fact, extensions can be initialized only once, but what happens if the extension is needed on multiple threads and is not threadsafe?

Store objects on the Flask object

I need to store an object on my flask.Flask instance. The naive approach would be to just assign an attribute like this:
app = flask.Flask(__name__)
app.my_object = MyObject()
I'm planning on referencing it later in an application context like this:
flask.current_app.my_object
I doubt this method is thread safe though. Is there a correct method to do this that is encouraged by Flask? If not, how would you safely implement the approach above?
I ended up using the config object.
app.config['MY_OBJECT'] = MyObject()
You can reference it like this in a request.
current_app.config['MY_OBJECT']
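For completeness, a minimal self-contained sketch of this approach; the MyObject class and the route are illustrative, not from the original post.
import flask

class MyObject:
    def greet(self):
        return "hello"

app = flask.Flask(__name__)
app.config['MY_OBJECT'] = MyObject()

@app.route("/")
def index():
    # current_app is bound inside the request/application context
    return flask.current_app.config['MY_OBJECT'].greet()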

Django: post_save signal and request object

Here are two of my model classes:
class DashboardVersion(models.Model):
    name = models.CharField(_("Dashboard name"), max_length=100)
    description = models.TextField(_("Description/Comment"), null=True, blank=True)
    modifier = models.ForeignKey(User, editable=False, related_name="%(app_label)s_%(class)s_modifier_related")
    modified = models.DateField(editable=False)

class Goal(models.Model):
    goal = models.TextField(_("Goal"))
    display_order = models.IntegerField(default=99999)
    dashboard_version = models.ForeignKey(DashboardVersion)
When a Goal is edited, added, deleted, etc., I want to set DashboardVersion.modifier to the user who modified it and DashboardVersion.modified to the current date.
I am trying to implement this using signals. It seems, though, that the post_save signal does not contain the request. Can I get it from somewhere, or do I have to create my own signal?
Or, should I do something completely different?
Thanks! :-)
Eric
I'd say the most straightforward thing to do would be to just update the DashboardVersion in the view that processes the Goal update. If you have multiple views in the same module that handle Goal updates, you could factor out the DashboardVersion update logic into a separate function.
If you're dead set on using signals, you could probably hack something together with a thread locals middleware, but I'd say the simplest approach is usually best.
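A minimal sketch of that first suggestion, assuming a form-based update view; GoalForm, the view, the template, and the URL names are illustrative, only the models come from the question.
import datetime

from django.shortcuts import get_object_or_404, redirect, render

def touch_dashboard_version(dashboard_version, user):
    # Record who changed the dashboard and when.
    dashboard_version.modifier = user
    dashboard_version.modified = datetime.date.today()
    dashboard_version.save()

def update_goal(request, goal_id):
    goal = get_object_or_404(Goal, pk=goal_id)
    form = GoalForm(request.POST or None, instance=goal)
    if request.method == 'POST' and form.is_valid():
        goal = form.save()
        touch_dashboard_version(goal.dashboard_version, request.user)
        return redirect('dashboard_detail', goal.dashboard_version.pk)
    return render(request, 'goal_form.html', {'form': form})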

Multiprogramming in Django, writing to the Database

Introduction
I have the following code which checks to see if a similar model exists in the database, and if it does not it creates the new model:
class BookProfile():
    # ...
    def save(self, *args, **kwargs):
        uniqueConstraint = {'book_instance': self.book_instance, 'collection': self.collection}
        # Test for other objects with identical values
        profiles = BookProfile.objects.filter(Q(**uniqueConstraint) & ~Q(pk=self.pk))
        # If none are found create the object, else fail.
        if len(profiles) == 0:
            super(BookProfile, self).save(*args, **kwargs)
        else:
            raise ValidationError('A Book Profile for that book instance in that collection already exists')
I first build my constraints, then search for a model with those values, which I am enforcing must be unique (Q(**uniqueConstraint)). In addition, I ensure that if the save method is updating rather than inserting, we do not find this object when looking for other similar objects (~Q(pk=self.pk)).
I should mention that I am implementing soft delete (with a modified objects manager which only shows non-deleted objects), which is why I must check for myself rather than relying on unique_together errors.
Problem
Right, that's the introduction out of the way. My problem is that when multiple identical objects are saved in quick (or near-simultaneous) succession, sometimes both get added even though the first being added should prevent the second.
I have tested the code in the shell and it succeeds every time I run it. Thus my assumption is that something like this happens: say two objects are being added, Object A and Object B. Object A runs its check upon save() being called. Then the process saving Object B gets some time on the processor. Object B runs the same test, but Object A has not yet been added, so Object B is added to the database. Then Object A regains control of the processor; having already run its test, it adds itself regardless, even though the identical Object B is now in the database.
My Thoughts
The reason I fear multiprogramming could be involved is that Object A and Object B are each being added through an API save view, so a request to the view is made for each save; it is not a single request with multiple sequential saves on objects.
It might be the case that Apache is creating a process for each request, and thus causing the problems I think I am seeing. As you would expect, the problem only occurs sometimes, which is characteristic of multiprogramming or multiprocessing errors.
If this is the case, is there a way to make the test and set parts of the save() method a critical section, so that a process switch cannot happen between the test and the set?
Based on what you've described, it seems reasonable to assume that multiple Apache processes are a source of problems. Are you able to replicate it if you limit Apache to a single worker process?
Maybe the suggestions in this thread will help: How to lock a critical section in Django?
An alternative approach could be utilizing a queue. You'd just stick your objects to be saved into the queue and have another process doing the actual save. That way you could guarantee that objects were processed sequentially. This wouldn't work well if your application depends on having the object saved by the time the response is returned unless you also had the request processes wait on the result (watching a finished queue for example).
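A toy sketch of that queue pattern, using a single worker thread and an in-process queue. This is purely illustrative; with multiple Apache processes you would need an external queue or task runner, since an in-process queue is only visible to one process.
import queue
import threading

save_queue = queue.Queue()

def saver():
    # Single consumer: queued objects are saved strictly one at a time.
    while True:
        obj = save_queue.get()
        try:
            obj.save()
        finally:
            save_queue.task_done()

threading.Thread(target=saver, daemon=True).start()

# Producers (request handlers) enqueue instead of saving directly:
# save_queue.put(book_profile)
# save_queue.join()  # optionally block until the save has been processed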
Updated
You may find this info useful. Mr. Dumpleton does a much better job of laying out the considerations than I could attempt to summarize here:
http://code.google.com/p/modwsgi/wiki/ProcessesAndThreading
http://code.google.com/p/modwsgi/wiki/ConfigurationGuidelines especially the Defining Process Groups section.
http://code.google.com/p/modwsgi/wiki/QuickConfigurationGuide Delegation to Daemon Process section
http://code.google.com/p/modwsgi/wiki/IntegrationWithDjango
Find the section of text toward the bottom of the page that begins with:
Now, traditional wisdom in respect of Django has been that it should preferably only be used on single threaded servers. This would mean for Apache using the single threaded 'prefork' MPM on UNIX systems and avoiding the multithreaded 'worker' MPM.
and read until the end of the page.
I have found a solution that I think might work:
import threading

def save(self, *args, **kwargs):
    lock = threading.Lock()
    lock.acquire()
    try:
        # Test and Set Code
    finally:
        lock.release()
It doesn't seem to break the save method like that decorator did, and thus far I have not seen the error again.
Unless anyone can say that this is not a correct solution, I think this works.
Update
The accepted answer was the inspiration for this change.
It seems I was under the impression that locks were some sort of special voodoo exempt from normal logic. Here, lock = threading.Lock() is run each time, instantiating a new, unlocked lock which can always be acquired immediately.
I needed a single central lock for the purpose, but where could that go unless I had a thread running all the time holding the lock? The answer seemed to be to use the file locks explained in this answer to the StackOverflow question mentioned in the accepted answer.
The following is that solution modified to suit my situation:
The Code
The following is my modified DjangoLock. I wished to keep locks relative to the Django root; to do this I put a custom variable into the settings.py file.
# locks.py
import os
import fcntl
from django.conf import settings

class DjangoLock:
    def __init__(self, filename):
        self.filename = os.path.join(settings.LOCK_DIR, filename)
        self.handle = open(self.filename, 'w')

    def acquire(self):
        fcntl.flock(self.handle, fcntl.LOCK_EX)

    def release(self):
        fcntl.flock(self.handle, fcntl.LOCK_UN)

    def __del__(self):
        self.handle.close()
And now the additional LOCK_DIR settings variable:
# settings.py
import os
PATH = os.path.abspath(os.path.dirname(__file__))
# ...
LOCK_DIR = os.path.join(PATH, 'locks')
That will now put locks in a folder named locks relative to the root of the Django project. Just make sure you give Apache write access; in my case:
sudo chown www-data locks
And finally the usage is much the same as before:
import locks

def save(self, *args, **kwargs):
    lock = locks.DjangoLock('ClassName')
    lock.acquire()
    try:
        # Test and Set Code
    finally:
        lock.release()
This is now the implementation I am using and it seems to be working really well. Thanks to all who have contributed to the process of arriving at this end.
You need to use synchronization on the save method. I haven't tried this yet, but here's a decorator that can be used to do so.
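Since the decorator itself is only linked, here is a minimal sketch of what a synchronizing decorator might look like; this is my illustration, not the linked code. Note that a threading.Lock only serializes saves within a single process, which is why the file-lock approach above was needed once multiple Apache processes were involved.
import functools
import threading

from django.db import models

_save_lock = threading.Lock()  # one lock shared by every call in this process

def synchronized(lock):
    # Run the wrapped function while holding the given lock.
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            with lock:
                return func(*args, **kwargs)
        return wrapper
    return decorator

class BookProfile(models.Model):
    # ...
    @synchronized(_save_lock)
    def save(self, *args, **kwargs):
        # test-and-set logic from the question goes here
        super(BookProfile, self).save(*args, **kwargs)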