Greetings,
Assume I have a model like this:
class Foo(models.Model):
    type = models.ForeignKey(Type)
    start_time = models.DateTimeField()
    end_time = models.DateTimeField()
For each Foo object with the same type, the time interval (end_time - start_time) must be unique, so that creating a second Foo with a clashing interval is impossible. How can this be achieved?
See the documentation about custom validation in the admin interface.
Basically you have to create your own model form, let's say CustomFooAdminForm, and assign it to the admin model:
class FooAdmin(admin.ModelAdmin):
    form = CustomFooAdminForm
and in the form you can have something like (see custom validation in forms):
# more or less pseudo code
class CustomFooAdminForm(forms.ModelForm):
    def clean(self):
        cleaned_data = super(CustomFooAdminForm, self).clean()
        interval = cleaned_data.get("end_time") - cleaned_data.get("start_time")
        type = cleaned_data.get("type")
        q = Foo.objects.extra(select={'interval': 'end_time - start_time'})
        counter = q.filter(interval=interval, type=type).count()
        if counter > 0:
            raise forms.ValidationError("A Foo of this type with the same interval already exists.")
        # Always return the full collection of cleaned data.
        return cleaned_data
Maybe you have to convert the DateTimeFields to UNIX timestamps before you can subtract them in SQL (UNIX_TIMESTAMP(end_time) - UNIX_TIMESTAMP(start_time) in MySQL), or use DATEDIFF() in MySQL to get the difference. But note that such special functions tie your application to a particular database (as long as they are not available in other databases under the same name).
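A database-agnostic alternative, sketched under the assumption that the fields are named start_time and end_time as in the model above: let the ORM build the interval expression with ExpressionWrapper instead of writing raw SQL. This is untested pseudo-helper code, not a drop-in replacement for the clean() method.

```python
# Sketch: compute the interval in the ORM, so no MySQL-specific
# function (UNIX_TIMESTAMP, DATEDIFF) is needed.
from django.db.models import DurationField, ExpressionWrapper, F

def clashing_foo_exists(cleaned_data):
    # The interval of the object being validated, as a timedelta.
    interval = cleaned_data["end_time"] - cleaned_data["start_time"]
    return (
        Foo.objects
        .annotate(interval=ExpressionWrapper(
            F("end_time") - F("start_time"),
            output_field=DurationField(),
        ))
        .filter(interval=interval, type=cleaned_data["type"])
        .exists()
    )
```

Inside clean() you would then raise a ValidationError when clashing_foo_exists(cleaned_data) returns True.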
I have two models with a one-to-many relation.
One model, Repairorder, can have one or more instances of Work that is performed on that order.
What I need is to annotate the Repairorder queryset with the cumulative Work duration. On the Work model I annotated the duration of a single Work instance based on its start and end timestamps. Now I need to use this annotated field to sum the total cumulative Work performed for each order. I tried to extend the base model manager:
from django.db import models
from django.db.models import DurationField, ExpressionWrapper, F, Sum
from django.db.models.functions import Coalesce, Now

class WorkManager(models.Manager):
    def get_queryset(self):
        return super(WorkManager, self).get_queryset().annotate(
            duration=ExpressionWrapper(
                Coalesce(F('enddate'), Now()) - F('startdate'),
                output_field=DurationField()
            )
        )

class Work(models.Model):
    # ...
    order_idorder = models.ForeignKey('Repairorder', models.DO_NOTHING)
    startdate = models.DateTimeField()
    enddate = models.DateTimeField()
    objects = WorkManager()

class RepairorderManager(models.Manager):
    def get_queryset(self):
        return super(RepairorderManager, self).get_queryset().annotate(
            totalwork=Sum('work__duration'), output_field=DurationField())

class Repairorder(models.Model):
    # ...
    idrepairorder = models.AutoField(primary_key=True)
    objects = RepairorderManager()
For each Repairorder I want to display the totalwork, however this error appears: QuerySet.annotate() received non-expression(s). And if I remove the output_field=DurationField() from the RepairorderManager, it says: Cannot resolve keyword 'duration' into field.
Doing it the 'Python way' by using model properties is not an option with big datasets.
You will need to add the calculation to the RepairorderManager as well:
class RepairorderManager(models.Manager):
    def get_queryset(self):
        return super(RepairorderManager, self).get_queryset().annotate(
            totalwork=ExpressionWrapper(
                Sum(Coalesce(F('work__enddate'), Now()) - F('work__startdate')),
                output_field=DurationField()
            )
        )
Django does not take into account annotations that a manager introduces on related objects, so the duration annotation from WorkManager is invisible when you aggregate from Repairorder.
I have the following model:
class MeasurementParameter(models.Model):
    tolerance = models.FloatField()
    set_value = models.FloatField()
    tol_low = None
    tol_high = None

    def tolerance_band(self):
        tol = self.set_value * self.tolerance / 100
        self.tol_high = self.set_value + tol
        self.tol_low = self.set_value - tol
        print(self.tol_low)
        return self.tol_high, self.tol_low
I wish to set the calculated attributes tol_low and tol_high using the tolerance_band method.
The model has a ManyToMany relationship with another model called Product.
class Product(models.Model):
    name = models.CharField(max_length=100)
    description = models.CharField(max_length=1000)
    parameters = models.ManyToManyField(MeasurementParameter, related_name='measurement')

    def calc_all_tol_bands(self):
        for parameter in self.parameters.all():
            hi, lo = parameter.tolerance_band()

    def __str__(self):
        return self.name
So in my view I attempt to calculate all tolerance bands by:
product.calc_all_tol_bands()
However, if I then try to read the calculated values:
product.parameters.all()[0].tol_low
I get None every time.
What do I need to do to be able to set calculated values in the MeasurementParameter model?
John.
This is expected behavior. When you evaluate
product.parameters.all()[0]
you make a database fetch, so Django will fetch the first of these parameters fresh from the database. Since tol_low and tol_high are not persistent (not stored in the database), the lookup falls back on the class attribute, which is None.
The calculations here are rather simple, so I propose that you convert these to properties [Python-doc]:
class MeasurementParameter(models.Model):
    tolerance = models.FloatField()
    set_value = models.FloatField()

    @property
    def tol_low(self):
        return self.set_value * (100 - self.tolerance) / 100

    @property
    def tol_high(self):
        return self.set_value * (100 + self.tolerance) / 100

    def tolerance_band(self):
        return self.tol_high, self.tol_low
Here we thus evaluate the property when necessary. This is more robust: if you change the tolerance or the set_value of an object, then tol_low and tol_high will change accordingly, so no complex code is needed to keep the values up to date after relevant updates. The calc_all_tol_bands method is not necessary either, since the calculations are simply done when requested.
Note that you can not use properties in Django ORM filters, etc. In that case, you can encode the property as a query expression and annotate the queryset with it.
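For example, a sketch (assuming the field names above) of mirroring the two properties as annotations, so they become filterable at the database level:

```python
# Sketch: express tol_low / tol_high as query expressions so they
# can be used in .filter() and .order_by().
from django.db.models import ExpressionWrapper, F, FloatField

qs = MeasurementParameter.objects.annotate(
    tol_low_db=ExpressionWrapper(
        F('set_value') * (100 - F('tolerance')) / 100,
        output_field=FloatField(),
    ),
    tol_high_db=ExpressionWrapper(
        F('set_value') * (100 + F('tolerance')) / 100,
        output_field=FloatField(),
    ),
)
# e.g. qs.filter(tol_high_db__gte=42.0)
```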
I'm unable to find the difference between two dates in my form.
models.py:
class Testing(models.Model):
    Planned_Start_Date = models.DateField()
    Planned_End_Date = models.DateField()
    Planned_Duration = models.IntegerField(default=Planned_Start_Date - Planned_End_Date)
The difference between the two dates has to be calculated and stored in the database, but it doesn't work.
default takes a value or a callable, but it is evaluated at the class level, without access to the instance, so you can't use it to do what you want. You should override the model's save() method (or better, implement a pre_save signal handler) to populate the field just before the object is saved:
def save(self, **kwargs):
    # Subtracting two dates yields a timedelta; the IntegerField needs the day count.
    self.Planned_Duration = (self.Planned_End_Date - self.Planned_Start_Date).days
    super().save(**kwargs)
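The pre_save variant mentioned above could look like this sketch; the 'app.Testing' sender string is a placeholder, replace 'app' with your actual app label:

```python
# Sketch of the pre_save alternative: the handler runs just before
# every call to Model.save(), without touching the model class itself.
from django.db.models.signals import pre_save
from django.dispatch import receiver

@receiver(pre_save, sender='app.Testing')  # 'app' is a placeholder app label
def set_planned_duration(sender, instance, **kwargs):
    # DateField subtraction yields a datetime.timedelta; an IntegerField
    # stores the whole-day count.
    instance.Planned_Duration = (instance.Planned_End_Date - instance.Planned_Start_Date).days
```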
But why save a computed value to the database at all? This column is unnecessary, both for querying (you can easily run computed queries on the start and end date) and for retrieving; you're wasting db space.
# if you need the duration, just define a property
@property
def planned_duration(self):
    return self.Planned_End_Date - self.Planned_Start_Date

# if you need to query tasks which last more than 2 days
Testing.objects.filter(Planned_End_Date__gt=F('Planned_Start_Date') + datetime.timedelta(days=2))
Note: Python conventions recommend snake_case for field names (planned_duration, planned_end_date, planned_start_date) and CamelCase for classes (TestingTask). Don't mix the two.
I have this model:
class Task(models.Model):
    class Meta:
        unique_together = ("campaign_id", "task_start", "task_end", "task_day")

    campaign_id = models.ForeignKey(Campaign, on_delete=models.DO_NOTHING)
    playlist_id = models.ForeignKey(PlayList, on_delete=models.DO_NOTHING)
    task_id = models.AutoField(primary_key=True, auto_created=True)
    task_start = models.TimeField()
    task_end = models.TimeField()
    task_day = models.TextField()
I need to write a validation test that checks if a newly created task time range overlaps with an existing one in the database.
For example:
A task with ID 1 already starts at 5:00PM and ends at 5:15PM on a Saturday. A new task cannot be created between the first task's start and end time. Where should I write this test, and what is the most efficient way to do this? I also use Django REST Framework serializers.
When you receive the form data from the user, you can:
- Check the fields are consistent: user task_start < user task_end, and warn the user if not.
- Query (SELECT) the database to retrieve all existing tasks which intersect the user's time range:
  - order the records by task_start (ORDER BY),
  - select only records which satisfy your criterion, i.e.:
    task_start <= user task_start <= task_end, or
    task_start <= user task_end <= task_end,
  - warn the user if at least one record is found.
- If everything is OK:
  - construct a Task instance,
  - store it in the database,
  - return success.
Implementation details:
task_start and task_end could be indexed in your database to improve selection time.
I saw that you also have a task_day field (which is a TEXT).
You should really consider using UTC DATETIME fields instead of TEXT, because you need to compare date AND time (and not only time): consider a task which starts at 23:30 and finish at 00:45 the day after…
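The overlap test itself reduces to the standard two-comparison check. A minimal plain-Python sketch (the example datetimes are made up) shows why full datetimes handle the midnight-crossing case naturally:

```python
from datetime import datetime

def overlaps(start_a, end_a, start_b, end_b):
    # Two intervals overlap iff each one starts before the other ends.
    return start_a < end_b and start_b < end_a

# Task crossing midnight: 23:30 to 00:45 the next day.
a = (datetime(2019, 6, 1, 23, 30), datetime(2019, 6, 2, 0, 45))
b = (datetime(2019, 6, 2, 0, 15), datetime(2019, 6, 2, 1, 0))
print(overlaps(*a, *b))  # True: b starts inside a
```

With bare TimeField values, the same comparison would wrongly report that 23:30-00:45 is an empty interval.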
This is how I solved it. It's not optimal by far, but I'm limited to Python 2.7 and Django 1.11, and I'm also a beginner.
def validate(self, data):
    errors = {}
    task_start = data.get('task_start')
    task_end = data.get('task_end')
    time_filter = (Q(task_start__range=[task_start, task_end])
                   | Q(task_end__range=[task_start, task_end]))
    filter_check = Task.objects.filter(time_filter).exists()
    if task_start > task_end:
        errors['error'] = u'End time cannot be earlier than start time!'
        raise serializers.ValidationError(errors)
    elif filter_check:
        errors['errors'] = u'Overlapping tasks'
        raise serializers.ValidationError(errors)
    return data
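One caveat with the two __range lookups above: they miss the case where an existing task completely contains the new one (existing task_start before the new task_start and existing task_end after the new task_end). A sketch of the standard two-comparison condition, which also catches containment (assuming the Task model above; a task_day filter would still be needed):

```python
# Sketch: the new interval and an existing task overlap iff each
# starts before the other ends; this also catches full containment.
overlap_exists = Task.objects.filter(
    task_start__lt=task_end,
    task_end__gt=task_start,
).exists()
```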
I'm working on a QuerySet class that does something similar to prefetch_related, but allows the query to link data that's in an unconnected database (basically, linking records from the django app's database to records in a legacy system, using a shared unique key), something along the lines of:
class UserFoo(models.Model):
    ''' Uses the django database & can link to User model '''
    user = models.OneToOneField(User, related_name='userfoo')
    foo_record = models.CharField(
        max_length=32,
        db_column="foo",
        unique=True
    )  # uuid pointing to legacy db table

    @property
    def foo(self):
        if not hasattr(self, '_foo'):
            self._foo = Foo.objects.get(uuid=self.foo_record)
        return self._foo

    @foo.setter
    def foo(self, foo_obj):
        self._foo = foo_obj
and then
class Foo(models.Model):
    '''Uses legacy database'''
    id = models.AutoField(primary_key=True)
    uuid = models.CharField(max_length=32)  # uuid for Foo legacy db table
    # …

    @property
    def user(self):
        if not hasattr(self, '_user'):
            self._user = User.objects.get(userfoo__foo_record=self.uuid)
        return self._user

    @user.setter
    def user(self, user_obj):
        self._user = user_obj
Run normally, a query that matches 100 foos (each with, say, 1 user record) ends up requiring 101 queries: one to get the foos, and a hundred more to look up the user record by calling the user property on each foo.
To get around this, I am making something similar to prefetch_related which pulls all of the matching records for a query by the key, which means I just need one additional query to get the remaining records.
My code looks something like this:
class FooWithUserQuerySet(models.query.QuerySet):
    def with_foo(self):
        qs = self._clone()
        foo_idx = {}
        for record in self.all():
            foo_idx.setdefault(record.uuid, []).append(record)
        users = User.objects.filter(
            userfoo__foo_record__in=foo_idx.keys()
        ).select_related('django', 'relations', 'here')
        user_idx = {}
        for user in users:
            user_idx[user.userfoo.foo_record] = user
        for fid, frecords in foo_idx.items():
            user = user_idx.get(fid)
            for frecord in frecords:
                if user:
                    setattr(frecord, 'user', user)
        return qs
This works, but any extra data saved to a foo is lost if the query is later modified — that is, if the queryset is re-ordered or filtered in any way.
I would like a way to create a method that does exactly what I am doing now, but defers the work until the moment the queryset is actually evaluated, so that foo records always have a User record attached no matter how the queryset is later modified.
Some notes:
the example has been highly simplified. There are actually a lot of tables that link up to the legacy data, and although there is a one-to-one relationship between Foo and User, there will be some cases where a queryset has multiple Foo records with the same key.
the legacy database is on a different server and server platform, so I can't link the two tables at the database level
ideally I'd like the User data to be cached, so that even if the records are sorted or sliced I don't have to re-run the foo query a second time.
Basically, I don't know enough about the internals of how lazy evaluation of querysets works to do the necessary coding. I have jumped back and forth through the source code of django.db.models.query, but it really is a fairly dense read, and I'm hoping someone who's worked with this already can offer some pointers.
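One possible direction, heavily hedged because _clone(), _fetch_all() and _result_cache are private QuerySet internals that can change between Django versions: record the intent as a flag, carry it through _clone(), and attach the users right after _fetch_all() populates the result cache. This is roughly how prefetch_related defers its own extra query. Untested sketch:

```python
class FooWithUserQuerySet(models.query.QuerySet):
    # Untested sketch against private API (_clone / _fetch_all / _result_cache).
    _attach_users = False

    def with_foo(self):
        qs = self._clone()
        qs._attach_users = True
        return qs

    def _clone(self, **kwargs):
        # Keep the flag alive across filter()/order_by()/slicing clones.
        qs = super(FooWithUserQuerySet, self)._clone(**kwargs)
        qs._attach_users = self._attach_users
        return qs

    def _fetch_all(self):
        # Every evaluation path (iteration, len(), list()) funnels through here.
        already_fetched = self._result_cache is not None
        super(FooWithUserQuerySet, self)._fetch_all()
        if self._attach_users and not already_fetched and self._result_cache:
            foo_idx = {}
            for record in self._result_cache:
                foo_idx.setdefault(record.uuid, []).append(record)
            users = User.objects.filter(
                userfoo__foo_record__in=foo_idx.keys()
            ).select_related('userfoo')
            for user in users:
                for frecord in foo_idx.get(user.userfoo.foo_record, ()):
                    frecord.user = user
```

Because the attachment happens inside _fetch_all(), re-ordering or re-filtering produces a clone that still carries the flag and re-links the users when it is evaluated.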