I would like to allow a group of my users to run arbitrary, periodic queries.
So far, I've managed to create this model:
class BasicQuery(models.Model):
id = models.AutoField(primary_key=True)
target_entity = models.CharField(max_length=255)
target_field = models.CharField(max_length=255)
operator = models.CharField(max_length=10)
compare_value = models.CharField(max_length=10)
active = models.BooleanField()
run_every = models.ForeignKey(QueryFrequency)
and the function which executes it:
def rule_runner(rule):
target_entity = rule.target_entity
fieldname = rule.target_field
val = rule.compare_value
if rule.operator:
cmd = '{}.objects.filter({}__{}={})'.format(rule.target_entity,
rule.target_field,
rule.operator,
rule.compare_value)
else:
cmd = '{}.objects.filter({}={})'.format(rule.target_entity,
rule.target_field,
rule.compare_value)
return eval(cmd)
But because of the eval(cmd), this code is far too dangerous to go to production; no matter how many tests I run before calling this function, I'll never feel safe about it. Any suggestions on how I can achieve this in a clean way?
You could unpack a dict containing the values so they can be passed dynamically to the filter method. As for rule.target_entity, you can use importlib.import_module(str) and getattr(module, str) to resolve it:
import importlib
def rule_runner(rule):
    # Resolve the model class by name from your app's models module
    model = getattr(importlib.import_module('app.models'), rule.target_entity)
    if rule.operator:
        field = '{}__{}'.format(rule.target_field, rule.operator)
    else:
        field = rule.target_field
    return model.objects.filter(**{field: rule.compare_value})
Unpacking a dict means putting ** before it. This can be used to set a function's keyword arguments dynamically: the dict keys become the argument names and the dict values become those arguments' values.
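For example, these two calls are equivalent (assuming a model named Product with a name field):
# The dict form lets you build the lookup key at runtime
Product.objects.filter(name__icontains='foo')
Product.objects.filter(**{'name__icontains': 'foo'})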
importlib.import_module lets you import, by string, any module on your PYTHONPATH, which usually includes all your Django apps' paths (so app.models shouldn't be a problem).
This is definitely safer than using eval, but you do have to check that the module path and the model exist, to avoid raising a ModuleNotFoundError (ImportError on Python < 3.6) or an AttributeError.
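Since the lookups still come from user input, it's also worth validating the operator against a whitelist and resolving the model through Django's app registry, which raises LookupError instead of importing arbitrary modules. A minimal sketch, assuming the rules all target models in an app labeled 'app'; the allowed-operator set is an example, not exhaustive:
from django.apps import apps

ALLOWED_OPERATORS = {'exact', 'iexact', 'contains', 'icontains', 'gt', 'gte', 'lt', 'lte'}

def safe_rule_runner(rule):
    # get_model raises LookupError for unknown apps/models instead of running imports
    model = apps.get_model('app', rule.target_entity)
    field = rule.target_field
    if rule.operator:
        if rule.operator not in ALLOWED_OPERATORS:
            raise ValueError('Unsupported operator: {}'.format(rule.operator))
        field = '{}__{}'.format(field, rule.operator)
    return model.objects.filter(**{field: rule.compare_value})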
Related
I created a custom slug generator function to use in multiple models across multiple apps. To achieve this I pass the relevant django app name and django model as string parameters to the function, so the function can check if the slug is unique for that model:
import random
import string

from django.apps import apps

def slug_generator(django_app, django_model, field_name, slug_len, prefix=""):
    unique_pk = False
    model = apps.get_model(django_app, django_model)
    while not unique_pk:
        # Build a random string of slug_len alphanumeric characters
        random_pks = random.sample(
            string.ascii_uppercase + string.digits + string.ascii_lowercase, slug_len)
        new_pk = ''.join(random_pks)
        search_query = {field_name: new_pk}
        try:
            if not model.objects.filter(**search_query).exists():
                unique_pk = True
        except Exception:
            # If the lookup fails, accept the slug when the table is empty
            if not model.objects.all().exists():
                unique_pk = True
    return prefix + new_pk
called in models.py like so:
class Form(models.Model):
form_id = models.CharField(
max_length=25, default=slug_generator('forms', 'Form', 'form_id', 25))
The function was working fine when I was only using it for a single model (so I wasn't using get_model, I just hard-coded the model it was for). Since adding get_model, this error is thrown on startup: django.core.exceptions.AppRegistryNotReady: Models aren't loaded yet.
I tried putting the following before my function:
import django
django.setup()
It only led me to a new error, RuntimeError: populate() isn't reentrant, so it makes me think that there is a better way to write the function. Is what I am trying to achieve possible, or should I just refactor the string generation portion out of the function and make a slug_generator function for each model with the model names hard-coded in?
Realised it's because I was actually calling the function in the Form field by passing it parameters, so it ran at import time, before the app registry was ready. I changed it to the following and got it running:
def get_slug():
return slug_generator('forms', 'Form', 'form_id', 25)
class Form(models.Model):
form_id = models.CharField(
max_length=25, default=get_slug)
Not sure if there is a more elegant way to do it, though.
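One slightly tighter option is functools.partial, which builds the deferred callable without defining a wrapper function. A sketch using the same slug_generator arguments; recent Django versions can serialize partial objects in migrations, but verify that for your version:
from functools import partial

class Form(models.Model):
    # partial() is itself callable, so Django only invokes it when a default is needed
    form_id = models.CharField(
        max_length=25, default=partial(slug_generator, 'forms', 'Form', 'form_id', 25))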
class PurchaseOrder(models.Model):
purchase_order_id = models.AutoField(primary_key=True)
purchase_order_number = models.CharField(unique=True)
vendor = models.ForeignKey(Vendor)
I am creating a purchase order (PO) table. When a PO is created, I have to set purchase_order_number to "PO0" + purchase_order_id, e.g. PO0123 (123 is the primary key), so I am overriding save() in the model to accomplish this:
def save(self, *args, **kwargs):
    if self.purchase_order_id is not None:
        self.purchase_order_number = "PO" + str(self.purchase_order_id)
    return super(PurchaseOrder, self).save(*args, **kwargs)
It works fine for a single creation, but when I try to create data in bulk using Locust (a testing tool), it gives a duplicate-entry error for purchase_order_number. Can we set the field value in the model itself, something like this?
purchase_order_number = models.CharField(unique=True, default=("PO" + self.purchase_order_id))
To be honest, I don't think it should work when you create multiple instances, because as I can see from the code:
if self.purchase_order_id is not None:
self.purchase_order_number = "PO"+str(self.purchase_order_id)
Here purchase_order_id will be None when you are creating a new instance. Also, until you call super(PurchaseOrder, self).save(), it will not generate purchase_order_id, meaning purchase_order_number will be empty.
So, what I would recommend is to not store this information in the DB. It's basically the same as purchase_order_id with PO in front of it. Instead you can use a property method to get the same value, like this:
class PurchaseOrder(models.Model):
purchase_order_id = models.AutoField(primary_key=True)
# need to remove `purchase_order_number = models.CharField(unique=True)`
...
@property
def purchase_order_number(self):
return "PO{}".format(self.purchase_order_id)
So, you can also see the purchase_order_number like this:
p = PurchaseOrder.objects.first()
p.purchase_order_number
The downside of this solution is that you can't query on the property field. But I don't think that would be necessary anyway, because you can run the same query against purchase_order_id, i.e. PurchaseOrder.objects.filter(purchase_order_id=1).
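If you really do need the value persisted (say, for raw SQL reporting), one common workaround is to save twice, so the auto-generated primary key exists before the number is derived from it. A sketch, assuming update_fields is available (Django 1.5+) and not tested against the OP's exact setup:
class PurchaseOrder(models.Model):
    purchase_order_id = models.AutoField(primary_key=True)
    purchase_order_number = models.CharField(max_length=20, unique=True, blank=True)

    def save(self, *args, **kwargs):
        # The first save obtains the auto-incremented primary key
        super(PurchaseOrder, self).save(*args, **kwargs)
        if not self.purchase_order_number:
            self.purchase_order_number = "PO{}".format(self.purchase_order_id)
            # The second save persists the derived number; update_fields keeps it cheap
            super(PurchaseOrder, self).save(update_fields=["purchase_order_number"])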
I am running into a database issue in my unit tests. I think it has something to do with the way I am using TestCase and setUpData.
When I try to set up my test data with certain values, the tests throw the following error:
django.db.utils.IntegrityError: duplicate key value violates unique constraint
...
psycopg2.IntegrityError: duplicate key value violates unique constraint "InventoryLogs_productgroup_product_name_48ec6f8d_uniq"
DETAIL: Key (product_name)=(Almonds) already exists.
I changed all of my primary keys and it seems to be running fine. It doesn't seem to affect any of the tests.
However, I'm concerned that I am doing something wrong. When it first happened, I reversed about an hour's worth of work on my app (not that much code for a noob), which corrected the problem.
Then when I wrote the changes back in, the same issue presented itself again. The TestCase is pasted below. The issue seems to occur after I add the sortrecord items, but it corresponds with the items above them.
I don't want to keep going through and changing primary keys and urls in my tests, so if anyone sees something wrong with the way I am using this, please help me out. Thanks!
TestCase
class DetailsPageTest(TestCase):
@classmethod
def setUpTestData(cls):
cls.product1 = ProductGroup.objects.create(
product_name="Almonds"
)
cls.variety1 = Variety.objects.create(
product_group = cls.product1,
variety_name = "non pareil",
husked = False,
finished = False,
)
cls.supplier1 = Supplier.objects.create(
company_name = "Acme",
company_location = "Acme Acres",
contact_info = "Call me!"
)
cls.shipment1 = Purchase.objects.create(
tag=9,
shipment_id=9999,
supplier_id = cls.supplier1,
purchase_date='2015-01-09',
purchase_price=9.99,
product_name=cls.variety1,
pieces=99,
kgs=999,
crackout_estimate=99.9
)
cls.shipment2 = Purchase.objects.create(
tag=8,
shipment_id=8888,
supplier_id=cls.supplier1,
purchase_date='2015-01-08',
purchase_price=8.88,
product_name=cls.variety1,
pieces=88,
kgs=888,
crackout_estimate=88.8
)
cls.shipment3 = Purchase.objects.create(
tag=7,
shipment_id=7777,
supplier_id=cls.supplier1,
purchase_date='2014-01-07',
purchase_price=7.77,
product_name=cls.variety1,
pieces=77,
kgs=777,
crackout_estimate=77.7
)
cls.sortrecord1 = SortingRecords.objects.create(
tag=cls.shipment1,
date="2015-02-05",
bags_sorted=20,
turnout=199,
)
cls.sortrecord2 = SortingRecords.objects.create(
tag=cls.shipment1,
date="2015-02-07",
bags_sorted=40,
turnout=399,
)
cls.sortrecord3 = SortingRecords.objects.create(
tag=cls.shipment1,
date='2015-02-09',
bags_sorted=30,
turnout=299,
)
Models
from datetime import datetime
from django.db import models
from django.db.models import Q
class ProductGroup(models.Model):
product_name = models.CharField(max_length=140, primary_key=True)
def __str__(self):
return self.product_name
class Meta:
verbose_name = "Product"
class Supplier(models.Model):
company_name = models.CharField(max_length=45)
company_location = models.CharField(max_length=45)
contact_info = models.CharField(max_length=256)
class Meta:
ordering = ["company_name"]
def __str__(self):
return self.company_name
class Variety(models.Model):
product_group = models.ForeignKey(ProductGroup)
variety_name = models.CharField(max_length=140)
husked = models.BooleanField()
finished = models.BooleanField()
description = models.CharField(max_length=500, blank=True)
class Meta:
ordering = ["product_group_id"]
verbose_name_plural = "Varieties"
def __str__(self):
return self.variety_name
class PurchaseYears(models.Manager):
def purchase_years_list(self):
unique_years = Purchase.objects.dates('purchase_date', 'year')
results_list = []
for p in unique_years:
results_list.append(p.year)
return results_list
class Purchase(models.Model):
tag = models.IntegerField(primary_key=True)
product_name = models.ForeignKey(Variety, related_name='purchases')
shipment_id = models.CharField(max_length=24)
supplier_id = models.ForeignKey(Supplier)
purchase_date = models.DateField()
estimated_delivery = models.DateField(null=True, blank=True)
purchase_price = models.DecimalField(max_digits=6, decimal_places=3)
pieces = models.IntegerField()
kgs = models.IntegerField()
crackout_estimate = models.DecimalField(max_digits=6,decimal_places=3, null=True)
crackout_actual = models.DecimalField(max_digits=6,decimal_places=3, null=True)
objects = models.Manager()
purchase_years = PurchaseYears()
# Keep manager as "objects" in case admin, etc. needs it. Filter can be called like so:
# Purchase.objects.purchase_years_list()
# Managers in docs: https://docs.djangoproject.com/en/1.8/intro/tutorial01/
class Meta:
ordering = ["purchase_date"]
def __str__(self):
return self.shipment_id
def _weight_conversion(self):
return round(self.kgs * 2.20462)
lbs = property(_weight_conversion)
class SortingModelsBagsCalculator(models.Manager):
def total_sorted(self, record_date, current_set):
sorted = [SortingRecords['bags_sorted'] for SortingRecords in current_set if
SortingRecords['date'] <= record_date]
return sum(sorted)
class SortingRecords(models.Model):
tag = models.ForeignKey(Purchase, related_name='sorting_record')
date = models.DateField()
bags_sorted = models.IntegerField()
turnout = models.IntegerField()
objects = models.Manager()
def __str__(self):
return "%s [%s]" % (self.date, self.tag.tag)
class Meta:
ordering = ["date"]
verbose_name_plural = "Sorting Records"
def _calculate_kgs_sorted(self):
kg_per_bag = self.tag.kgs / self.tag.pieces
kgs_sorted = kg_per_bag * self.bags_sorted
return (round(kgs_sorted, 2))
kgs_sorted = property(_calculate_kgs_sorted)
def _byproduct(self):
waste = self.kgs_sorted - self.turnout
return (round(waste, 2))
byproduct = property(_byproduct)
def _bags_remaining(self):
current_set = SortingRecords.objects.values().filter(~Q(id=self.id), tag=self.tag)
sorted = [SortingRecords['bags_sorted'] for SortingRecords in current_set if
SortingRecords['date'] <= self.date]
remaining = self.tag.pieces - sum(sorted) - self.bags_sorted
return remaining
bags_remaining = property(_bags_remaining)
EDIT
It also fails with integers, like so.
django.db.utils.IntegrityError: duplicate key value violates unique constraint "InventoryLogs_purchase_pkey"
DETAIL: Key (tag)=(9) already exists.
UPDATE
So I should have mentioned this earlier, but I completely forgot: I have two unit test files that use the same data. Just for kicks, I set a primary key in both instances of setUpTestData() to the same new value and, sure enough, I got the same error.
These two setups were working fine side-by-side before I added more data to one of them. Now, it appears that they need different values. I guess you can only get away with using repeat data for so long.
I continued to get this error without having any duplicate data, but I was able to resolve the issue by initializing the object and calling the save() method, rather than creating the object via Model.objects.create().
In other words, I did this:
@classmethod
def setUpTestData(cls):
cls.person = Person(first_name="Jane", last_name="Doe")
cls.person.save()
Instead of this:
@classmethod
def setUpTestData(cls):
cls.person = Person.objects.create(first_name="Jane", last_name="Doe")
I've been running into this issue sporadically for months now. I believe I just figured out the root cause and a couple solutions.
Summary
For whatever reason, it seems like the Django test case base classes aren't removing the database records created by (let's just call it) TestCase1 before running TestCase2. So when TestCase2 tries to create records using the same IDs as TestCase1, the database raises a duplicate-key exception because those IDs already exist. And even saying the magic word "please" won't help with database duplicate key errors.
Good news is, there are multiple ways to solve this problem! Here are a couple...
Solution 1
Make sure that if you override the class method tearDownClass, you call super().tearDownClass(). If you override tearDownClass() without calling its super, it will in turn never call TransactionTestCase._post_teardown() nor TransactionTestCase._fixture_teardown(). Quoting from the docstring of TransactionTestCase._post_teardown():
def _post_teardown(self):
"""
Perform post-test things:
* Flush the contents of the database to leave a clean slate. If the
class has an 'available_apps' attribute, don't fire post_migrate.
* Force-close the connection so the next test gets a clean cursor.
"""
If TestCase.tearDownClass() is not called via super() then the database is not reset in between test cases and you will get the dreaded duplicate key exception.
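In other words, if you do override it, keep the super call. A minimal sketch (the class name and cleanup are placeholders):
class MyTestCase(TestCase):
    @classmethod
    def tearDownClass(cls):
        # ... your own class-level cleanup here ...
        super(MyTestCase, cls).tearDownClass()  # without this, the test DB never gets flushed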
Solution 2
Override TransactionTestCase and set the class variable serialized_rollback = True, like this:
class MyTestCase(TransactionTestCase):
fixtures = ['test-data.json']
serialized_rollback = True
def test_name_goes_here(self):
pass
Quoting from the source:
class TransactionTestCase(SimpleTestCase):
...
# If transactions aren't available, Django will serialize the database
# contents into a fixture during setup and flush and reload them
# during teardown (as flush does not restore data from migrations).
# This can be slow; this flag allows enabling on a per-case basis.
serialized_rollback = False
When serialized_rollback is set to True, the Django test runner rolls back any transactions inserted into the database between test cases. And batta bing, batta bang... no more duplicate key errors!
Conclusion
There are probably many more ways to implement a solution for the OP's issue, but these two should work nicely. Would definitely love to have more solutions added by others for clarity's sake and a deeper understanding of the underlying Django test case base classes. Phew, say that last line real fast three times and you could win a pony!
The log you provided states DETAIL: Key (product_name)=(Almonds) already exists. Did you verify this in your db?
To prevent such errors in the future, you should prefix all your test data strings with test_.
I discovered the issue, as noted at the bottom of the question.
From what I can tell, the database didn't like me using duplicate data in the setUpTestData() methods of two different tests. Changing the primary key values in the second test corrected the problem.
I think the problem here is that you had a tearDownClass method in your TestCase without a call to the super method.
That way, the Django TestCase loses the transactional functionality behind setUpTestData, so it doesn't clean your test db after a TestCase finishes.
Check warning in django docs here:
https://docs.djangoproject.com/en/1.10/topics/testing/tools/#django.test.SimpleTestCase.allow_database_queries
I had similar problem that had been caused by providing the primary key value to a test case explicitly.
As discussed in the Django documentation, manually assigning a value to an auto-incrementing field doesn’t update the field’s sequence, which might later cause a conflict.
I have solved it by altering the sequence manually:
from django.db import connection
class MyTestCase(TestCase):
@classmethod
def setUpTestData(cls):
Model.objects.create(id=1)
with connection.cursor() as c:
c.execute(
"""
ALTER SEQUENCE "app_model_id_seq" RESTART WITH 2;
"""
)
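Note that ALTER SEQUENCE is PostgreSQL-specific, and the sequence name must match the one Postgres generated. Django can print the appropriate reset statements for an app's tables with the sqlsequencereset management command, e.g.:
python manage.py sqlsequencereset app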
I'm trying to write an internal API in my application without necessarily coupling it with the database.
class Product(models.Model):
name=models.CharField(max_length=4000)
price=models.IntegerField(default=-1)
currency=models.CharField(max_length=3, default='INR')
class Image(models.Model):
# NOTE -- Have changed the table name to products_images
width=models.IntegerField(default=-1)
height=models.IntegerField(default=-1)
url=models.URLField(max_length=1000, verify_exists=False)
product=models.ForeignKey(Product)
def create_product():
    p = Product()
    i = Image(height=100, width=100, url='http://something/something')
    p.image_set.add(i)
    return p
Now, when I call create_product() Django throws up an error:
IntegrityError: products_images.product_id may not be NULL
However, if I call p.save() & i.save() before calling p.image_set.add(i) it works. Is there any way that I can add objects to a related object set without saving both to the DB first?
def create_product():
    product_obj = Product.objects.create(name='Foobar')
    image_obj = Image.objects.create(height=100, width=100,
                                     url='http://something/something', product=product_obj)
    return product_obj
Explanation:
The Product object has to be created first and then assigned to the Image object, because the product's id and name are required fields.
I am wondering why you wouldn't want to make a product entry in the DB in the first case? If there is a specific reason, I may be able to suggest a workaround.
EDIT: Okay! I think I got you: you don't want to assign a product to an image object initially. How about making the product field nullable (null=True)?
product = models.ForeignKey(Product, null=True)
Now, your function becomes something like this:
def create_product():
    image_obj = Image.objects.create(height=100, width=100, url='http://something/something')
    return image_obj
Hope it helps!
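The product can then be attached later, once it actually exists (assuming the nullable foreign key above):
image_obj.product = product_obj
image_obj.save()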
I hit the same issue as @Saurabh Nanda.
I am using Django 1.4.2. Reading the Django source, I see:
# file django/db/models/fields/related.py
def get_query_set(self):
try:
return self.instance._prefetched_objects_cache[rel_field.related_query_name()]
except (AttributeError, KeyError):
db = self._db or router.db_for_read(self.model, instance=self.instance)
return super(RelatedManager,self).get_query_set().using(db).filter(**self.core_filters)
# file django/db/models/query.py
qs = getattr(obj, attname).all()
qs._result_cache = vals
# We don't want the individual qs doing prefetch_related now, since we
# have merged this into the current work.
qs._prefetch_done = True
obj._prefetched_objects_cache[cache_name] = qs
That makes sense: we only need to set the _prefetched_objects_cache property on the object.
p = Product()
image_cached = []
for i in xrange(100):
    image = Image(height=100, width=100, url='http://something/something')
    image_cached.append(image)
# 'images' assumes the ForeignKey was declared with related_name='images'
qs = p.images.all()
qs._result_cache = image_cached
qs._prefetch_done = True
p._prefetched_objects_cache = {'images': qs}
Your problem is that the id isn't set by Django but by the database (it's represented in the database by an auto-incremented field), so until the object is saved there's no id. More about this in the documentation.
I can think of three possible solutions:
Set a different field of your Image model as the primary key (documented here).
Set a different field of your Product model as the foreign key (documented here).
Use django's database transactions API (documented here).
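For the third option, here is a sketch of what that could look like on Django 1.6+ (the field values are placeholders): saving both objects inside one atomic block means a failure rolls back the pair, though each object is still saved before being linked.
from django.db import transaction

def create_product():
    with transaction.atomic():
        # Both rows commit together, or neither does
        p = Product.objects.create(name='Foobar')
        Image.objects.create(height=100, width=100,
                             url='http://something/something', product=p)
    return p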
So I'm trying to return a JSON object for a project. I've spent a few hours trying to get Django to return just the JSON.
Here's the view that we've been working with:
def json(request, first_name):
user = User.objects.all()
#user = User.objects.all().values()
result = simplejson.dumps(user, default=json_util.default)
return HttpResponse(result)
Here's my model:
class User(Document):
gender = StringField( choices=['male', 'female', 'Unknown'])
age = IntField()
email = EmailField()
display_name = StringField(max_length=50)
first_name = StringField(max_length=50)
last_name = StringField(max_length=50)
location = StringField(max_length=50)
status = StringField(max_length=50)
hideStatus = BooleanField()
photos = ListField(EmbeddedDocumentField('Photo'))
profile =ListField(EmbeddedDocumentField('ProfileItem'))
allProfile = ListField(EmbeddedDocumentField('ProfileItem')) #only return for your own profile
This is what it's returning:
[<User: User object>, <User: User object>] is not JSON serializable
Any thoughts on how I can just return the JSON?
With MongoEngine 0.8 or greater, objects and querysets have a to_json() method.
>>> User.objects.to_json()
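So the view can be reduced to something like this (a sketch: it's renamed to json_view to avoid shadowing the json module, and filtering by first_name is an assumption based on the URL parameter):
from django.http import HttpResponse

def json_view(request, first_name):
    users = User.objects.filter(first_name=first_name)
    return HttpResponse(users.to_json(), content_type='application/json')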
simplejson.dumps() doesn't know how to "reach into" your custom objects; the default function, json_util.default, must just be calling str() or repr() on your documents. (Is json_util custom code you've written? If so, showing its source here could prove my claim.)
Ultimately, your default function will need to be able to make sense of the MongoEngine documents. I can think of at least two ways that this might be implemented:
Write a custom default function that works for all MongoEngine documents by introspecting their _fields attribute (though note that the leading underscore means that this is part of the private API/implementation detail of MongoEngine and may be subject to change in future versions)
Have each of your documents implement an as_dict method which returns a dictionary representation of the object. This would work similarly to the to_mongo method provided on documents by MongoEngine, but shouldn't return the _types or _cls fields (again, these are implementation details of MongoEngine).
I'd suggest you go with option #2: the code will be cleaner and easier to read, better encapsulated, and won't require using any private APIs.
As dcrosta suggested, you can do something like this; hope it helps.
Document definition
class MyDocument(Document):
# Your document definition
def to_dict(self):
return mongo_to_dict_helper(self)
helper.py:
from mongoengine import StringField, ListField, IntField, FloatField
def mongo_to_dict_helper(obj):
    return_data = []
    for field_name in obj._fields:
        if field_name in ("id",):
            continue
        data = obj._data[field_name]
        if isinstance(obj._fields[field_name], StringField):
            return_data.append((field_name, str(data)))
        elif isinstance(obj._fields[field_name], FloatField):
            return_data.append((field_name, float(data)))
        elif isinstance(obj._fields[field_name], IntField):
            return_data.append((field_name, int(data)))
        elif isinstance(obj._fields[field_name], ListField):
            return_data.append((field_name, data))
        else:
            # You can define your logic for returning other field types here
            pass
    return dict(return_data)
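With the helper in place, the original view can serialize the documents by converting each one first (a sketch: to_dict comes from the MyDocument example above, so User would need the same method defined):
def json_view(request, first_name):
    # Convert each document to a plain dict before handing it to simplejson
    users = [u.to_dict() for u in User.objects]
    return HttpResponse(simplejson.dumps(users), content_type='application/json')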