Here are two of my model classes:
class DashboardVersion(models.Model):
    name = models.CharField(_("Dashboard name"), max_length=100)
    description = models.TextField(_("Description/Comment"), null=True, blank=True)
    modifier = models.ForeignKey(User, editable=False, related_name="%(app_label)s_%(class)s_modifier_related")
    modified = models.DateField(editable=False)

class Goal(models.Model):
    goal = models.TextField(_("Goal"))
    display_order = models.IntegerField(default=99999)
    dashboard_version = models.ForeignKey(DashboardVersion)
When a Goal is edited, added, deleted, etc., I want to set DashboardVersion.modifier to the user who made the change and DashboardVersion.modified to the current date.
I am trying to implement this using signals. It seems, though, that the post_save signal does not have access to the request. Can I get it from somewhere, or do I have to create my own signal?
Or, should I do something completely different?
Thanks! :-)
Eric
I'd say the most straightforward thing to do would be to just update the DashboardVersion in the view that processes the Goal update. If you have multiple views in the same module that handle Goal updates, you could factor out the DashboardVersion update logic into a separate function.
If you're dead set on using signals, you could probably hack something together with a thread locals middleware, but I'd say the simplest approach is usually best.
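For illustration, a minimal sketch of the view-based approach suggested above (the view, helper, and URL name here are made up, not from your code):

import datetime

from django.shortcuts import get_object_or_404, redirect


def touch_dashboard_version(dashboard_version, user):
    """Record who last changed the dashboard and when."""
    dashboard_version.modifier = user
    dashboard_version.modified = datetime.date.today()
    dashboard_version.save()


def update_goal(request, goal_id):
    goal = get_object_or_404(Goal, pk=goal_id)
    # ... apply the submitted form data to the goal here ...
    goal.save()
    touch_dashboard_version(goal.dashboard_version, request.user)
    return redirect('dashboard_detail', goal.dashboard_version.pk)  # hypothetical URL name

Any other view that creates or deletes a Goal would call the same helper after its own save/delete.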
I have an AppConfig.ready() implementation which depends on the readiness of another application.
Is there a signal or method (which I could implement) which gets called after all application ready() methods have been called?
I know that Django calls the ready() methods in the order of INSTALLED_APPS.
But I don't want to enforce a particular ordering of INSTALLED_APPS.
Example:
INSTALLED_APPS = [
    'app_a',
    'app_b',
    ...
]
How can "app_a" receive a signal (or method call) after "app_b" processed AppConfig.ready()?
(reordering INSTALLED_APPS is not a solution)
I'm afraid the answer is No. Populating the application registry happens in django.setup(). If you look at the source code, you will see that neither apps.registry.Apps.populate() nor django.setup() dispatch any signals upon completion.
Here are some ideas:
You could dispatch a custom signal yourself, but that would require doing so in every entry point of your Django project, e.g. manage.py, wsgi.py and any scripts that call django.setup().
You could connect to request_started and disconnect once your handler has been called (see the sketch below).
If you are initializing some kind of property, you could defer that initialization until the first access.
Whether any of these approaches works for you obviously depends on what exactly you are trying to achieve.
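A minimal sketch of the second idea (run_deferred_setup is just a placeholder for whatever work depends on every app being ready):

from django.core.signals import request_started


def run_deferred_setup():
    # hypothetical: whatever needs every app's ready() to have run
    pass


def on_first_request(sender, **kwargs):
    run_deferred_setup()
    # disconnect so this only runs once
    request_started.disconnect(on_first_request)


# e.g. at the end of your AppConfig.ready()
request_started.connect(on_first_request)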
So there is a VERY hackish way to accomplish what you might want...
Inside django.apps.registry is the singleton apps, which Django uses to populate the applications. See setup() in django/__init__.py.
apps.populate uses a non-reentrant (thread-based) lock so that population happens only once, in an idempotent, thread-safe manner.
Here is the stripped-down source for the Apps class, which is what the singleton apps is instantiated from:
class Apps(object):

    def __init__(self, installed_apps=()):
        # Lock for thread-safe population.
        self._lock = threading.Lock()

    def populate(self, installed_apps=None):
        if self.ready:
            return
        with self._lock:
            if self.ready:
                return
            for app_config in self.get_app_configs():
                app_config.ready()
            self.ready = True
With this knowledge, you can create threading.Thread objects that wait on a condition. These consumer threads use threading.Condition to send cross-thread signals (which enforces the ordering you want). Here is a mocked-out example of how that would work:
import threading

from django.apps import apps, AppConfig

# here we are using "apps._lock" to synchronize our threads, which
# is the dirty little trick that makes this work
foo_ready = threading.Condition(apps._lock)


class FooAppConfig(AppConfig):
    name = "foo"

    def ready(self):
        t = threading.Thread(name='Foo.ready', target=self._ready_foo,
                             args=(foo_ready,))
        t.daemon = True
        t.start()

    def _ready_foo(self, foo_ready):
        with foo_ready:
            # setup foo
            foo_ready.notifyAll()  # let everyone else waiting continue


class BarAppConfig(AppConfig):
    name = "bar"

    def ready(self):
        t = threading.Thread(name='Bar.ready', target=self._ready_bar,
                             args=(foo_ready,))
        t.daemon = True
        t.start()

    def _ready_bar(self, foo_ready):
        with foo_ready:
            foo_ready.wait()  # wait until foo is ready
            # setup bar
Again, this ONLY allows you to control the flow of the ready() calls from the individual AppConfigs. It doesn't control the order models get loaded, etc.
But if, as you said, you have a ready() implementation that depends on another app being ready first, this should do the trick.
Reasoning:
Why Conditions? The reason this uses threading.Condition over threading.Event is two-fold. Firstly, conditions are wrapped in a locking layer. This means that you will continue to operate under controlled circumstances if the need arises (accessing shared resources, etc). Secondly, because of this tight level of control, staying inside the threading.Condition's context will allow you to chain the configurations in some desirable ordering. You can see how that might be done with the following snippet:
lock = threading.Lock()
foo_ready = threading.Condition(lock)
bar_ready = threading.Condition(lock)
baz_ready = threading.Condition(lock)
Why Daemonic Threads? The reason is that, if your Django application were to die sometime between acquiring and releasing the lock in apps.populate, the background threads would continue to spin waiting for the lock to be released. Setting them to daemon mode allows the process to exit cleanly without needing to .join those threads.
You can add a dummy app whose only purpose is to fire a custom all_apps_are_ready signal (or a method call on AppConfig).
Put this app at the end of INSTALLED_APPS.
If this app receives the AppConfig.ready() method call, you know that all other apps are ready.
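A rough sketch of such a dummy app (the app name, module layout and signal name are assumptions):

# lastapp/signals.py
import django.dispatch

all_apps_are_ready = django.dispatch.Signal()


# lastapp/apps.py  -- list 'lastapp' last in INSTALLED_APPS
from django.apps import AppConfig

from lastapp.signals import all_apps_are_ready


class LastAppConfig(AppConfig):
    name = 'lastapp'

    def ready(self):
        # every app listed before this one has had its ready() called by now
        all_apps_are_ready.send(sender=self.__class__)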
An alternative solution:
Subclass AppConfig and send a signal at the end of ready. Use this subclass in all your apps. If you have a dependency on one being loaded, hook up to that signal/sender pair.
If you need more details, don't hesitate!
There are some subtleties to this method:
1) Where to put the signal definition (I suspect manage.py would work, or you could even monkey-patch django.setup to ensure it runs everywhere django.setup() is called). You could put it in a core app that is always first in INSTALLED_APPS, or somewhere Django will always load before any AppConfigs.
2) Where to register the signal receiver (you should be able to do this in AppConfig.__init__ or possibly just globally in that file).
See https://docs.djangoproject.com/en/dev/ref/applications/#how-applications-are-loaded
Therefore, the setup is as follows:
When Django first starts up, register the signal.
At the end of every app_config.ready(), send the signal (with the AppConfig instance as the sender).
In AppConfigs that need to respond to the signal, register a receiver in __init__ with the appropriate sender.
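A hedged sketch of those steps (module paths, class names and the signal are made up for illustration; it assumes app_b also uses the signalling base class):

# core/signals.py -- loaded early, e.g. imported from manage.py / wsgi.py
import django.dispatch

app_config_ready = django.dispatch.Signal()


# core/apps.py
from django.apps import AppConfig

from core.signals import app_config_ready


class SignallingAppConfig(AppConfig):
    def ready(self):
        super(SignallingAppConfig, self).ready()
        app_config_ready.send(sender=self.__class__)


# app_a/apps.py -- needs to run something after app_b is ready
from core.apps import SignallingAppConfig
from core.signals import app_config_ready


class AppAConfig(SignallingAppConfig):
    name = 'app_a'

    def __init__(self, *args, **kwargs):
        super(AppAConfig, self).__init__(*args, **kwargs)
        app_config_ready.connect(self._on_app_ready)

    def _on_app_ready(self, sender, **kwargs):
        if sender.__name__ == 'AppBConfig':
            pass  # app_b's ready() has finished; do the dependent setup here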
Let me know how it goes!
If you need it to work for third-party apps, keep in mind that you can override the AppConfigs for these apps (the convention is to place these in a directory called apps). Alternatively, you could monkey-patch AppConfig.
I have a list variable (e.g. the output of a database query) which I use to create actors (they could be many, and they are varied). I use the following code in the TestedActor preStart(); the actor's fully qualified class name comes from the list variable, as in this example:
Class<?> classobject = Class.forName("com.java.anything.actor.MyActor"); //create class from name string
ActorRef actref = getContext().actorOf(Props.create(classobject), actorname); //creation
The code was tested:
@Test
public void testPreStart() throws Exception {
    final Props props = Props.create(TestedActor.class);
    final TestActorRef<TestedActor> ref = TestActorRef.create(system, props, "testA");

    @SuppressWarnings("unused")
    final TestedActor actor = ref.underlyingActor();
}
EDIT: it is working fine (contrary to my previous post, where I saw a timeout error; that turned out to be an unrelated alarm).
I have googled some posts related to this issue (e.g. suggesting the use of newInstance), but I am still confused, as those suggestions were later dismissed as a bad pattern. So I am looking for a solution in Java which is also safe from the Akka point of view (or confirmation of the above pattern).
Maybe if you told us why you need to create those actors this way, it would help to find a solution.
Actually most people will tell you that using reflection is not the best idea. Sometimes it's the only option but you should avoid it.
Maybe this would be a solution for you:
Since actors are really cheap, you can create all of them upfront. How many of them do you have?
Then the query could return you a path to the actor rather than the class. Select it with actorSelection and send messages to it.
If your actors do a long-running job, you can use a router, or, if you want, a proxy actor that spawns other actors as needed. Another option is to create futures from a single actor.
It really depends on the case, because you may need to create multiple execution contexts so as not to starve any of the actors (or futures).
In one of my applications I want to limit users to a specific number of document conversions each calendar month, and to notify them of the conversions they've made and the number they can still make in that calendar month.
So I do something like the following.
class CustomUser(models.Model):
    # user fields here

    def get_converted_docs(self):
        return self.document_set.filter(date__range=[start, end]).count()

    def remaining_docs(self):
        converted = self.get_converted_docs()
        return LIMIT - converted
Now, document conversion is done in the background using Celery. So there may be a situation where a conversion task is pending; in that case the above methods would let a user make an extra conversion, because the pending task is not included in the count.
How can I get the number of pending tasks for a specific CustomUser object here?
Update
OK, so I tried the following:
from celery.task.control import inspect

def get_scheduled_tasks():
    tasks = []
    scheduled = inspect().scheduled()
    for task in scheduled.values():
        tasks.extend(task)
    return tasks
This gives me a list of scheduled tasks, but all the values are unicode; for the above-mentioned task, the args look like this:
u'args': u'(<Document: test_document.doc>, <CustomUser: Test User>)'
Is there a way these can be decoded back into the original Django objects so that I can filter them?
Store the state of your documents somewhere else; don't inspect your queue.
Either create a separate model for that, or e.g. have a state field on your document model, independent of your queue. This has several advantages:
Inspecting the queue might be expensive, depending on the backend used for it, and as you have seen it can also turn out to be difficult.
Your queue might not be persistent: if e.g. your server crashes and you use something like Redis, you would lose this information, so it's a good thing to have a record somewhere else to be able to reconstruct the queue.
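For example, a rough sketch of that idea with a status field on the Document model (the field and status values are assumptions; LIMIT, start and end are as in your code):

from django.db import models


class Document(models.Model):
    STATUS_CHOICES = (
        ('pending', 'Pending'),
        ('converted', 'Converted'),
        ('failed', 'Failed'),
    )
    user = models.ForeignKey('CustomUser')
    date = models.DateTimeField(auto_now_add=True)
    status = models.CharField(max_length=10, choices=STATUS_CHOICES, default='pending')


class CustomUser(models.Model):
    # user fields here

    def used_docs(self):
        # count both finished and still-pending conversions this month
        return self.document_set.filter(
            status__in=['pending', 'converted'],
            date__range=[start, end]).count()

    def remaining_docs(self):
        return LIMIT - self.used_docs()

The Celery task would then set status='converted' (or 'failed') when it finishes, so the quota check sees pending conversions as well, without ever touching the queue.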
We have a wrapper around a suds (SOAP) request that we use like this throughout our app:
from app.wrapper import ByDesign
bd = ByDesign()
Unfortunately, this instantiation is made at several points per request, causing suds to redownload the WSDL file, and I think we could save some time by making bd = ByDesign() return a singleton.
Since suds is not threadsafe, it'd have to be a singleton per request.
The only catch is that I'd like to change nothing but the app.wrapper.ByDesign class, so that none of the calling code has to change. If it weren't for the 'singleton per request' requirement, I'd do something like this:
class ByDesignRenamed(object):
    pass

_BD_INSTANCE = None

def ByDesign():
    global _BD_INSTANCE
    if not _BD_INSTANCE:
        _BD_INSTANCE = ByDesignRenamed()
    return _BD_INSTANCE
But, this won't work in a threaded server environment. Any ideas for me?
Check out threading.local(), which is somewhere between pure evil and the only way to get things going. It should probably be something like this:
import threading

_local = threading.local()

def ByDesign():
    if 'bd' not in _local.__dict__:
        _local.bd = ByDesignRenamed()
    return _local.bd
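If 'singleton per request' needs to be taken literally (threads are typically reused across requests, so the cached instance above outlives a single request), one hedged extra step is a small middleware that drops the cached instance at the start of every request; the class name and module path here are made up:

# app/middleware.py  (old-style middleware; add it to MIDDLEWARE_CLASSES)
from app.wrapper import _local


class ClearByDesignMiddleware(object):

    def process_request(self, request):
        # forget any ByDesign instance left over from a previous request
        _local.__dict__.pop('bd', None)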
Further reading:
Why is using thread locals in Django bad?
Thread locals in Python - negatives, with regards to scalability?
Introduction
I have the following code, which checks whether a similar model exists in the database, and if it does not, creates the new model:
from django.core.exceptions import ValidationError
from django.db import models
from django.db.models import Q


class BookProfile(models.Model):
    # ...

    def save(self, *args, **kwargs):
        uniqueConstraint = {'book_instance': self.book_instance, 'collection': self.collection}

        # Test for other objects with identical values
        profiles = BookProfile.objects.filter(Q(**uniqueConstraint) & ~Q(pk=self.pk))

        # If none are found create the object, else fail.
        if len(profiles) == 0:
            super(BookProfile, self).save(*args, **kwargs)
        else:
            raise ValidationError('A Book Profile for that book instance in that collection already exists')
I first build my constraints, then search for a model with those values, which I am enforcing must be unique (Q(**uniqueConstraint)). In addition, I ensure that if the save method is updating rather than inserting, we do not find the object itself when looking for similar objects (~Q(pk=self.pk)).
I should mention that I am implementing soft delete (with a modified objects manager which only shows non-deleted objects), which is why I must check this myself rather than relying on unique_together errors.
Problem
Right, that's the introduction out of the way. My problem is that when multiple identical objects are saved in quick (or near-simultaneous) succession, sometimes both get added, even though the first one being added should prevent the second.
I have tested the code in the shell and it succeeds every time I run it. Thus my assumption is this: say we have two objects being added, Object A and Object B. Object A runs its check when save() is called. Then the process saving Object B gets some time on the processor. Object B runs the same test, but Object A has not yet been added, so Object B is added to the database. Then Object A regains control of the processor; it has already run its test, so even though the identical Object B is now in the database, it gets added regardless.
My Thoughts
The reason I fear multiprogramming could be involved is that Object A and Object B are each added through an API save view, so a separate request to the view is made for each save, rather than a single request performing multiple sequential saves.
It might be the case that Apache is creating a process for each request, and thus causing the problems I think I am seeing. As you would expect, the problem only occurs sometimes, which is characteristic of multiprogramming or multiprocessing errors.
If this is the case, is there a way to make the test and set parts of the save() method a critical section, so that a process switch cannot happen between the test and the set?
Based on what you've described, it seems reasonable to assume that multiple Apache processes are the source of the problem. Are you able to reproduce it if you limit Apache to a single worker process?
Maybe the suggestions in this thread will help: How to lock a critical section in Django?
An alternative approach could be utilizing a queue. You'd just stick your objects to be saved into the queue and have another process doing the actual save. That way you could guarantee that objects were processed sequentially. This wouldn't work well if your application depends on having the object saved by the time the response is returned unless you also had the request processes wait on the result (watching a finished queue for example).
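A rough sketch of that queue idea using a Celery task (the names and module path are made up; running the worker with concurrency 1 is what makes the saves sequential, and the response would only be able to report that the save was queued):

# tasks.py -- start the worker with a single process/thread, e.g.
#   celery worker --concurrency=1
from celery.task import task

from myapp.models import BookProfile  # hypothetical module path


@task
def save_book_profile(book_instance_id, collection_id):
    # with one worker, this check-then-create can no longer interleave
    # with another identical save
    exists = BookProfile.objects.filter(book_instance_id=book_instance_id,
                                        collection_id=collection_id).exists()
    if not exists:
        BookProfile.objects.create(book_instance_id=book_instance_id,
                                   collection_id=collection_id)

The API view would then call save_book_profile.delay(book_instance.pk, collection.pk) instead of saving directly.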
Updated
You may find this info useful. Mr. Dumpleton does a much better job of laying out the considerations than I could attempt to summarize here:
http://code.google.com/p/modwsgi/wiki/ProcessesAndThreading
http://code.google.com/p/modwsgi/wiki/ConfigurationGuidelines especially the Defining Process Groups section.
http://code.google.com/p/modwsgi/wiki/QuickConfigurationGuide Delegation to Daemon Process section
http://code.google.com/p/modwsgi/wiki/IntegrationWithDjango
Find the section of text toward the bottom of the page that begins with:
Now, traditional wisdom in respect of Django has been that it should preferably only be used on single threaded servers. This would mean for Apache using the single threaded 'prefork' MPM on UNIX systems and avoiding the multithreaded 'worker' MPM.
and read until the end of the page.
I have found a solution that I think might work:
import threading

def save(self, *args, **kwargs):
    lock = threading.Lock()
    lock.acquire()
    try:
        # Test and Set Code
    finally:
        lock.release()
It doesn't seem to break the save method like that decorator did, and thus far I have not seen the error again.
Unless anyone can say that this is not a correct solution, I think this works.
Update
The accepted answer was the inspiration for this change.
It seems I was under the impression that locks were some sort of special voodoo exempt from normal logic. Here lock = threading.Lock() is run each time, instantiating a new, unlocked lock which can always be acquired immediately.
I needed a single central lock for the purpose, but where could that live unless I had a thread running all the time holding it? The answer seemed to be to use file locks, explained in this answer to the Stack Overflow question mentioned in the accepted answer.
The following is that solution modified to suit my situation:
The Code
The following is my modified DjangoLock. I wanted to keep locks relative to the Django root; to do this I put a custom variable into the settings.py file.
# locks.py
import os
import fcntl

from django.conf import settings


class DjangoLock:

    def __init__(self, filename):
        self.filename = os.path.join(settings.LOCK_DIR, filename)
        self.handle = open(self.filename, 'w')

    def acquire(self):
        fcntl.flock(self.handle, fcntl.LOCK_EX)

    def release(self):
        fcntl.flock(self.handle, fcntl.LOCK_UN)

    def __del__(self):
        self.handle.close()
And now the additional LOCK_DIR settings variable:
# settings.py
import os

PATH = os.path.abspath(os.path.dirname(__file__))

# ...

LOCK_DIR = os.path.join(PATH, 'locks')
That will now put locks in a folder named locks, relative to the root of the Django project. Just make sure you give Apache write access; in my case:
sudo chown www-data locks
And finally the usage is much the same as before:
import locks
def save(self, *args, **kwargs):
lock = locks.DjangoLock('ClassName')
lock.acquire()
try:
# Test and Set Code
finally:
lock.release()
This is now the implementation I am using and it seems to be working really well. Thanks to all who have contributed to the process of arriving at this end.
You need to use synchronization on the save method. I haven't tried this yet, but here's a decorator that can be used to do so.
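For illustration only (this is not the decorator referred to above), a minimal sketch of what such a synchronization decorator could look like; note that, like the threading approach above, it only serializes saves within a single process, so it does not help across multiple Apache processes:

import threading
from functools import wraps

from django.db import models

_save_lock = threading.Lock()  # one lock shared by the whole process


def synchronized(func):
    """Run func while holding the module-level lock."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        with _save_lock:
            return func(*args, **kwargs)
    return wrapper


class BookProfile(models.Model):

    @synchronized
    def save(self, *args, **kwargs):
        # the test-and-set code runs here while no other thread in this
        # process can be inside save()
        super(BookProfile, self).save(*args, **kwargs)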