So I have these three models:
class Database(models.Model):
    # fields...

class DatabaseFieldsToCheck(models.Model):
    data_base = models.ForeignKey(Database, on_delete=models.CASCADE, related_name="sql_requests_to_check")

class Result(models.Model):
    result = models.ForeignKey(DatabaseFieldsToCheck, on_delete=models.CASCADE, related_name="results")
So my relations look like this:
Database ---many---> DatabaseFieldsToCheck ---many---> Result
In my view I want to get, for each database, only the last 10 results.
How can I do it?
Should I try raw SQL, or maybe write some data transfer objects?
Add a field to your model that you can order by time. First, add this field to the model whose results you want to fetch:
created = models.DateTimeField(default=timezone.now, editable=False)
Then in your views:

def viewname(request):
    result = Result.objects.order_by('-created')[:10]
    return render(request, 'home.html', {'result': result})
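Note that the query above returns the last 10 results overall, not per database. If you need the last 10 for each database, a simple (unoptimized) sketch is one query per database; this assumes the created field above has been added to Result, and the view and template names are placeholders:

from django.shortcuts import render

def databases_with_results(request):
    data = []
    for db in Database.objects.all():
        # follow Result -> DatabaseFieldsToCheck -> Database via the FK names in the question
        last_ten = (Result.objects
                    .filter(result__data_base=db)
                    .order_by('-created')[:10])
        data.append((db, last_ten))
    return render(request, 'home.html', {'data': data})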
In a model like the one below
class Watched(Stamping):
    user = models.ForeignKey("User", null=True, blank=True, on_delete=models.CASCADE)
    count = models.PositiveIntegerField(default=0)
    created_at = models.DateTimeField(auto_now_add=True)
    updated_at = models.DateTimeField(auto_now=True)
Anytime an object is retrieved, I increment the count attribute.
Now my problem is how to get the number of times an object was retrieved for each day of the week.
For example, WatchedObject1 will have {'Sun': 10, 'Tue': 70, 'Wed': 35}.
This seems like a use case for auditing, and there are Django plugins that can help you with that. If you don't want to add that dependency, you would have to create another model in which you store your intended data:
class RetrievalOfData(models.Model):
    date_of_retrieval = models.DateTimeField(auto_now_add=True)
    object_retrieved = models.ForeignKey("Watched", on_delete=models.CASCADE)
You could probably also override the manager to create these objects every time the model is queried: https://docs.djangoproject.com/en/3.2/topics/db/managers/
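With a table like that, the per-weekday counts the question asks for can be computed with an aggregation. A rough sketch using the model above (some_watched_object is a placeholder for a Watched instance; ExtractWeekDay numbers days 1 for Sunday through 7 for Saturday):

from django.db.models import Count
from django.db.models.functions import ExtractWeekDay

per_weekday = (RetrievalOfData.objects
               .filter(object_retrieved=some_watched_object)
               .annotate(weekday=ExtractWeekDay('date_of_retrieval'))
               .values('weekday')
               .annotate(total=Count('id'))
               .order_by('weekday'))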
You might find it better to have a separate WatchedModelStats table, and perhaps link it to your model with Django signals. Whenever a countable event takes place, execute something like:
try:
    counter = WatchedModelStats.objects.get(name=model_name, date=today)
    counter.count += 1
except WatchedModelStats.DoesNotExist:
    counter = WatchedModelStats(name=model_name, date=today, count=1)
counter.save()
One advantage is extensibility. You could easily implement multiple counts for different event types, if the need later becomes apparent.
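A rough sketch of how this could fit together, assuming WatchedModelStats has name, date and count fields (these names are assumptions, not taken from the question) and hooking the update in with a post_save signal on Watched, since the original code saves the object each time its count is incremented:

from datetime import date

from django.db import models
from django.db.models.signals import post_save
from django.dispatch import receiver

class WatchedModelStats(models.Model):
    name = models.CharField(max_length=255)
    date = models.DateField()
    count = models.PositiveIntegerField(default=0)

@receiver(post_save, sender=Watched)
def record_watch_event(sender, instance, created, **kwargs):
    # one counter row per model name and day
    counter, _ = WatchedModelStats.objects.get_or_create(
        name=sender.__name__, date=date.today()
    )
    counter.count += 1
    counter.save()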
I am running into a database issue in my unit tests. I think it has something to do with the way I am using TestCase and setUpTestData.
When I try to set up my test data with certain values, the tests throw the following error:
django.db.utils.IntegrityError: duplicate key value violates unique constraint
...
psycopg2.IntegrityError: duplicate key value violates unique constraint "InventoryLogs_productgroup_product_name_48ec6f8d_uniq"
DETAIL: Key (product_name)=(Almonds) already exists.
I changed all of my primary keys and it seems to be running fine. It doesn't seem to affect any of the tests.
However, I'm concerned that I am doing something wrong. When it first happened, I reversed about an hour's worth of work on my app (not that much code for a noob), which corrected the problem.
Then when I wrote the changes back in, the same issue presented itself again. The TestCase is pasted below. The issue seems to occur after I add the sortrecord items, but the error corresponds to the items above them.
I don't want to keep going through and changing primary keys and urls in my tests, so if anyone sees something wrong with the way I am using this, please help me out. Thanks!
TestCase
class DetailsPageTest(TestCase):

    @classmethod
    def setUpTestData(cls):
        cls.product1 = ProductGroup.objects.create(
            product_name="Almonds"
        )
        cls.variety1 = Variety.objects.create(
            product_group=cls.product1,
            variety_name="non pareil",
            husked=False,
            finished=False,
        )
        cls.supplier1 = Supplier.objects.create(
            company_name="Acme",
            company_location="Acme Acres",
            contact_info="Call me!"
        )
        cls.shipment1 = Purchase.objects.create(
            tag=9,
            shipment_id=9999,
            supplier_id=cls.supplier1,
            purchase_date='2015-01-09',
            purchase_price=9.99,
            product_name=cls.variety1,
            pieces=99,
            kgs=999,
            crackout_estimate=99.9
        )
        cls.shipment2 = Purchase.objects.create(
            tag=8,
            shipment_id=8888,
            supplier_id=cls.supplier1,
            purchase_date='2015-01-08',
            purchase_price=8.88,
            product_name=cls.variety1,
            pieces=88,
            kgs=888,
            crackout_estimate=88.8
        )
        cls.shipment3 = Purchase.objects.create(
            tag=7,
            shipment_id=7777,
            supplier_id=cls.supplier1,
            purchase_date='2014-01-07',
            purchase_price=7.77,
            product_name=cls.variety1,
            pieces=77,
            kgs=777,
            crackout_estimate=77.7
        )
        cls.sortrecord1 = SortingRecords.objects.create(
            tag=cls.shipment1,
            date="2015-02-05",
            bags_sorted=20,
            turnout=199,
        )
        cls.sortrecord2 = SortingRecords.objects.create(
            tag=cls.shipment1,
            date="2015-02-07",
            bags_sorted=40,
            turnout=399,
        )
        cls.sortrecord3 = SortingRecords.objects.create(
            tag=cls.shipment1,
            date='2015-02-09',
            bags_sorted=30,
            turnout=299,
        )
Models
from datetime import datetime
from django.db import models
from django.db.models import Q
class ProductGroup(models.Model):
    product_name = models.CharField(max_length=140, primary_key=True)

    def __str__(self):
        return self.product_name

    class Meta:
        verbose_name = "Product"


class Supplier(models.Model):
    company_name = models.CharField(max_length=45)
    company_location = models.CharField(max_length=45)
    contact_info = models.CharField(max_length=256)

    class Meta:
        ordering = ["company_name"]

    def __str__(self):
        return self.company_name


class Variety(models.Model):
    product_group = models.ForeignKey(ProductGroup)
    variety_name = models.CharField(max_length=140)
    husked = models.BooleanField()
    finished = models.BooleanField()
    description = models.CharField(max_length=500, blank=True)

    class Meta:
        ordering = ["product_group_id"]
        verbose_name_plural = "Varieties"

    def __str__(self):
        return self.variety_name


class PurchaseYears(models.Manager):
    def purchase_years_list(self):
        unique_years = Purchase.objects.dates('purchase_date', 'year')
        results_list = []
        for p in unique_years:
            results_list.append(p.year)
        return results_list


class Purchase(models.Model):
    tag = models.IntegerField(primary_key=True)
    product_name = models.ForeignKey(Variety, related_name='purchases')
    shipment_id = models.CharField(max_length=24)
    supplier_id = models.ForeignKey(Supplier)
    purchase_date = models.DateField()
    estimated_delivery = models.DateField(null=True, blank=True)
    purchase_price = models.DecimalField(max_digits=6, decimal_places=3)
    pieces = models.IntegerField()
    kgs = models.IntegerField()
    crackout_estimate = models.DecimalField(max_digits=6, decimal_places=3, null=True)
    crackout_actual = models.DecimalField(max_digits=6, decimal_places=3, null=True)

    objects = models.Manager()
    purchase_years = PurchaseYears()

    # Keep manager as "objects" in case admin, etc. needs it. The custom manager can be called like so:
    # Purchase.purchase_years.purchase_years_list()
    # Managers in docs: https://docs.djangoproject.com/en/1.8/intro/tutorial01/

    class Meta:
        ordering = ["purchase_date"]

    def __str__(self):
        return self.shipment_id

    def _weight_conversion(self):
        return round(self.kgs * 2.20462)

    lbs = property(_weight_conversion)


class SortingModelsBagsCalulator(models.Manager):
    def total_sorted(self, record_date, current_set):
        sorted = [SortingRecords['bags_sorted'] for SortingRecords in current_set if
                  SortingRecords['date'] <= record_date]
        return sum(sorted)


class SortingRecords(models.Model):
    tag = models.ForeignKey(Purchase, related_name='sorting_record')
    date = models.DateField()
    bags_sorted = models.IntegerField()
    turnout = models.IntegerField()

    objects = models.Manager()

    def __str__(self):
        return "%s [%s]" % (self.date, self.tag.tag)

    class Meta:
        ordering = ["date"]
        verbose_name_plural = "Sorting Records"

    def _calculate_kgs_sorted(self):
        kg_per_bag = self.tag.kgs / self.tag.pieces
        kgs_sorted = kg_per_bag * self.bags_sorted
        return round(kgs_sorted, 2)

    kgs_sorted = property(_calculate_kgs_sorted)

    def _byproduct(self):
        waste = self.kgs_sorted - self.turnout
        return round(waste, 2)

    byproduct = property(_byproduct)

    def _bags_remaining(self):
        current_set = SortingRecords.objects.values().filter(~Q(id=self.id), tag=self.tag)
        sorted = [SortingRecords['bags_sorted'] for SortingRecords in current_set if
                  SortingRecords['date'] <= self.date]
        remaining = self.tag.pieces - sum(sorted) - self.bags_sorted
        return remaining

    bags_remaining = property(_bags_remaining)
EDIT
It also fails with integers, like so.
django.db.utils.IntegrityError: duplicate key value violates unique constraint "InventoryLogs_purchase_pkey"
DETAIL: Key (tag)=(9) already exists.
UPDATE
So I should have mentioned this earlier, but I completely forgot. I have two unit test files that use the same data. Just for kicks, I matched a primary key in both instances of setUpTestData() to a different value and sure enough, I got the same error.
These two setups were working fine side-by-side before I added more data to one of them. Now, it appears that they need different values. I guess you can only get away with using repeat data for so long.
I continued to get this error without having any duplicate data, but I was able to resolve the issue by initializing the object and calling the save() method, rather than creating the object via Model.objects.create().
In other words, I did this:
@classmethod
def setUpTestData(cls):
    cls.person = Person(first_name="Jane", last_name="Doe")
    cls.person.save()
Instead of this:
@classmethod
def setUpTestData(cls):
    cls.person = Person.objects.create(first_name="Jane", last_name="Doe")
I've been running into this issue sporadically for months now. I believe I just figured out the root cause and a couple solutions.
Summary
For whatever reason, it seems like the Django test case base classes aren't removing the database records created by, let's just call it, TestCase1 before running TestCase2. So when TestCase2 tries to create records in the database using the same IDs as TestCase1, the database raises a duplicate key exception because those IDs already exist. And even saying the magic word "please" won't help with database duplicate key errors.
Good news is, there are multiple ways to solve this problem! Here are a couple...
Solution 1
Make sure that if you override the class method tearDownClass, you call super().tearDownClass(). If you override tearDownClass() without calling its super, it will in turn never call TransactionTestCase._post_teardown() nor TransactionTestCase._fixture_teardown(). Quoting from the docstring of TransactionTestCase._post_teardown():
def _post_teardown(self):
    """
    Perform post-test things:
    * Flush the contents of the database to leave a clean slate. If the
      class has an 'available_apps' attribute, don't fire post_migrate.
    * Force-close the connection so the next test gets a clean cursor.
    """
If TestCase.tearDownClass() is not called via super() then the database is not reset in between test cases and you will get the dreaded duplicate key exception.
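For example, a minimal sketch of a teardown that keeps the database cleanup intact:

from django.test import TestCase

class MyTests(TestCase):

    @classmethod
    def tearDownClass(cls):
        # ... your own class-level cleanup here ...
        super().tearDownClass()  # lets Django run its own teardown and reset the database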
Solution 2
Subclass TransactionTestCase and set the class attribute serialized_rollback = True, like this:
class MyTestCase(TransactionTestCase):
    fixtures = ['test-data.json']
    serialized_rollback = True

    def test_name_goes_here(self):
        pass
Quoting from the source:
class TransactionTestCase(SimpleTestCase):
    ...
    # If transactions aren't available, Django will serialize the database
    # contents into a fixture during setup and flush and reload them
    # during teardown (as flush does not restore data from migrations).
    # This can be slow; this flag allows enabling on a per-case basis.
    serialized_rollback = False
When serialized_rollback is set to True, the Django test runner rolls back any transactions inserted into the database between test cases. And batta bing, batta bang... no more duplicate key errors!
Conclusion
There are probably many more ways to implement a solution for the OP's issue, but these two should work nicely. I would definitely love to have more solutions added by others, for clarity's sake and for a deeper understanding of the underlying Django test case base classes. Phew, say that last line real fast three times and you could win a pony!
The log you provided states DETAIL: Key (product_name)=(Almonds) already exists. Did you verify this in your DB?
To prevent such errors in the future, you should prefix all your test data strings with test_.
I discovered the issue, as noted at the bottom of the question.
From what I can tell, the database didn't like me using duplicate data in the setUpTestData() methods of two different tests. Changing the primary key values in the second test corrected the problem.
I think the problem here is that you had a tearDownClass method in your TestCase without a call to the super method.
That way, the Django TestCase loses the transactional functionality behind setUpTestData, so it doesn't clean your test DB after a TestCase is finished.
Check the warning in the Django docs here:
https://docs.djangoproject.com/en/1.10/topics/testing/tools/#django.test.SimpleTestCase.allow_database_queries
I had a similar problem that was caused by explicitly providing the primary key value in a test case.
As discussed in the Django documentation, manually assigning a value to an auto-incrementing field doesn't update the field's sequence, which might later cause a conflict.
I solved it by altering the sequence manually:
from django.db import connection

class MyTestCase(TestCase):

    @classmethod
    def setUpTestData(cls):
        Model.objects.create(id=1)
        with connection.cursor() as c:
            c.execute(
                """
                ALTER SEQUENCE "app_model_id_seq" RESTART WITH 2;
                """
            )
I have been thinking about my problem for days and I need a fresh view on it.
I am building a small application for a client for his deliveries.
# models.py - Clients app

class ClientPR(models.Model):
    title = models.CharField(max_length=5,
                             choices=TITLE_LIST,
                             default='mr')
    last_name = models.CharField(max_length=65)
    first_name = models.CharField(max_length=65, verbose_name='Prénom')
    frequency = WeekdayField(default=[])  # returns a CommaSeparatedIntegerField, 0 for Monday to 6 for Sunday...
    [...]

# models.py - Delivery app

class Truck(models.Model):
    name = models.CharField(max_length=40, verbose_name='Nom')
    description = models.CharField(max_length=250, blank=True)
    color = models.CharField(max_length=10,
                             choices=COLORS,
                             default='green',
                             unique=True,
                             verbose_name='Couleur Associée')

class Order(models.Model):
    delivery = models.ForeignKey(OrderDelivery, verbose_name='Delivery')
    client = models.ForeignKey(ClientPR)
    order = models.PositiveSmallIntegerField()

class OrderDelivery(models.Model):
    date = models.DateField(default=d.today())
    truck = models.ForeignKey(Truck, verbose_name='Camion', unique_for_date="date")
So I was trying to write a query, and I got this one:

(ClientPR.objects.today()
    .filter(order__delivery__date=date.today())
    .order_by('order__delivery__truck', 'order__order'))
But it does not do what I really want.
I want to have a list of ClientPR objects (querysets) grouped by truck and ordered by today's delivery order!
The thing is, I want to have EVERY client for the day, even those who are not in the delivery list, and with filter() that cannot happen.
I can make a query with the OrderDelivery model, but then I will only get the clients who are on a delivery, not all of them for the day...
Maybe I will need to do it with a Q object? Or even raw SQL?
Maybe I have built my model relationships the wrong way? Or I need to scale back what I want to do... Well, for now, I need your help to see the problem with new eyes!
Thanks to those who take some time to help me.
After some tests, I decided to go with two queries for one table.
One from the OrderDelivery queryset to get the list of clients grouped by truck, and another from the ClientPR queryset for all the clients without a delivery set for them.
That way, no problem!
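For reference, a rough sketch of what those two queries could look like with the models above; it assumes the custom ClientPR.objects.today() manager method used in the question returns the clients scheduled for today's weekday:

from datetime import date

today = date.today()

# 1) Clients that are on one of today's deliveries, grouped by truck via the ordering
on_delivery = (Order.objects
               .filter(delivery__date=today)
               .select_related('client', 'delivery__truck')
               .order_by('delivery__truck', 'order'))

# 2) Clients scheduled for today but not on any delivery yet
client_ids = on_delivery.values_list('client_id', flat=True)
not_scheduled = ClientPR.objects.today().exclude(pk__in=client_ids)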
So I'm trying to put together a webpage, and I am currently having trouble putting together a results page for each user in the web application I am building.
Here are what my models look like:
class Fault(models.Model):
    name = models.CharField(max_length=255)
    severity = models.PositiveSmallIntegerField(default=0)
    description = models.CharField(max_length=1024, null=False, blank=False)
    recommendation = models.CharField(max_length=1024, null=False, blank=False)
    date_added = models.DateTimeField(_('date added'), default=timezone.now)
    ...

class FaultInstance(models.Model):
    auto = models.ForeignKey(Auto)
    fault = models.ForeignKey(Fault)
    date_added = models.DateTimeField(_('date added'), default=timezone.now)

    objects = FaultInstanceManager()
    ...

class Auto(models.Model):
    label = models.CharField(max_length=255)
    model = models.CharField(max_length=255)
    make = models.CharField(max_length=255)
    year = models.IntegerField(max_length=4)
    user = models.ForeignKey(AUTH_USER_MODEL)
    ...
I don't know if my model relationships are ideal; however, it made sense in my head. Each user can have multiple Auto objects associated with them, and each Auto can have multiple FaultInstance objects associated with it.
In the results page, I want to list all the FaultInstances that a user has across their Autos. Under each listed FaultInstance I will have a list of all the autos the user owns that have the fault, with their information (here is kind of what I had in mind):
All FaultInstance Listing Ordered by Severity (large number to low number)
FaultInstance:
    FaultDescription:
    FaultRecommendation:
    ListofAutosWithFault:
        AutoLabel AutoModel AutoYear ...
        AutoLabel AutoModel AutoYear ...
Obviously, doing things the correct way means I want to do as much of the list creation as possible on the Python/Django side and avoid doing any logic or processing in the template. I am able to create a list per severity with a model manager, as seen here:
class FaultInstanceManager(models.Manager):
    def get_faults_by_user_severity(self, user, severity):
        faults = defaultdict(list)
        qs_faultinst = self.model.objects.select_related().filter(
            auto__user=user, fault__severity=severity
        ).order_by('auto__make')
        for result in qs_faultinst:
            faults[result.fault].append(result)
        faults.default_factory = None
        return faults
I still need to specify each severity, but I guess if I only have 5 severity levels I can create a list for each level and pass each one to the template. Any suggestions on this are appreciated. However, that's not my problem. My stopping point right now is that I want a summary table at the top of the report giving the user a breakdown of fault instances per make|model|year. I can't think of the proper query or data structure to pass on to the template.
Summary (table of all the FaultInstances with the following column headers):
FaultInstance Make|Model|Year NumberOfAutosAffected
This will let me know metrics for a make or a model or a year (in the example below, it's separating faults based on model). I'm listing FaultInstances because I'm only listing Faults that are connected to a user.
For Example
Bad Starter Nissan 1
Bad Tailight Honda 2
Bad Tailight Nissan 1
And I am such a perfectionist that I want to do this while optimizing database queries. If I can create a data structure in my original query that can be easily parsed in the template and still produce both sections of my report (maybe a defaultdict of a defaultdict(list)), that's what I want to do. Thanks for the help, and hopefully my question is thorough and makes sense.
It makes sense to use related names because they simplify your queries. Like this:
class FaultInstance(models.Model):
    auto = models.ForeignKey(Auto, related_name='fault_instances')
    fault = models.ForeignKey(Fault, related_name='fault_instances')
    ...

class Auto(models.Model):
    user = models.ForeignKey(AUTH_USER_MODEL, related_name='autos')
In this case you can use:
qs_faultinst = user.fault_instances.filter(fault__severity=severity).order_by('auto__make')
instead of:
qs_faultinst = self.model.objects.select_related().filter(
    auto__user=user, fault__severity=severity
).order_by('auto__make')
I can't figure out your summary table; maybe you meant:
Fault    Make|Model|Year    NumberOfAutosAffected
In this case you can use aggregation. But grouping could still be slow if you have a lot of data. One easy solution is to denormalize the data by creating an extra model and a few signals to update it, or you can use a cache.
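For completeness, a sketch of what that aggregation could look like with the models from the question (user is assumed to be in scope):

from django.db.models import Count

summary = (FaultInstance.objects
           .filter(auto__user=user)
           .values('fault__name', 'fault__severity', 'auto__model', 'auto__year')
           .annotate(autos_affected=Count('auto', distinct=True))
           .order_by('-fault__severity'))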
If you have a predefined set of severities, then consider this:
class Fault(models.Model):
    SEVERITY_LOW = 0
    SEVERITY_MIDDLE = 1
    SEVERITY_HIGH = 2
    ...
    SEVERITY_CHOICES = (
        (SEVERITY_LOW, 'Low'),
        (SEVERITY_MIDDLE, 'Middle'),
        (SEVERITY_HIGH, 'High'),
        ...
    )
    ...
    severity = models.PositiveSmallIntegerField(default=SEVERITY_LOW,
                                                choices=SEVERITY_CHOICES)
    ...
In your templates you can just iterate through Fault.SEVERITY_CHOICES.
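Putting it together, one possible sketch of a view that builds a severity-keyed structure using the manager method from the question (the view name and report.html are placeholders):

from django.shortcuts import render

def fault_report(request):
    by_severity = {
        label: FaultInstance.objects.get_faults_by_user_severity(request.user, value)
        for value, label in Fault.SEVERITY_CHOICES
    }
    return render(request, 'report.html', {'by_severity': by_severity})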
Update:
Change your models:
Move the car model into a separate model:
class AutoModel(models.Model):
    name = models.CharField(max_length=255)
Change the model field of the Auto model:
class Auto(models.Model):
    ...
    auto_model = models.ForeignKey(AutoModel, related_name='cars')
    ...
Add a model:
class MyDenormalizedModelForReport(models.Model):
    fault = models.ForeignKey(Fault, related_name='reports')
    auto_model = models.ForeignKey(AutoModel, related_name='reports')
    year = models.IntegerField(max_length=4)
    number_of_auto_affected = models.IntegerField(default=0)
Add a signal:
from django.db.models.signals import post_save

def update_denormalized_model(sender, instance, created, **kwargs):
    if created:
        rep, dummy_created = MyDenormalizedModelForReport.objects.get_or_create(
            fault=instance.fault,
            auto_model=instance.auto.auto_model,
            year=instance.auto.year,
        )
        rep.number_of_auto_affected += 1
        rep.save()

post_save.connect(update_denormalized_model, sender=FaultInstance)
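If you go this route, the usual place to make sure the connect() call (or a @receiver decorator) actually runs is your app's AppConfig.ready(); a small sketch, with the app and module names as placeholders:

# apps.py
from django.apps import AppConfig

class ReportsConfig(AppConfig):
    name = 'reports'  # placeholder app name

    def ready(self):
        # importing the module registers the post_save handler defined above
        from . import signals  # noqa: F401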