I am trying to create a fixture for a model with an auto_now_add DateTimeField:
created_at = models.DateTimeField(auto_now_add=True)
When the fixture is loaded, an error is thrown: IntegrityError: Problem installing fixture: null value in column "created_at" violates not-null constraint
Is there a way to have Django determine the date rather than manually entering a date?
[
  {
    "model": "customer.type",
    "pk": 1,
    "fields": {
      "name": "type",
      "created_by": 1
    }
  }
]
One easy workaround is to use the default and editable arguments (both documented in the Django model field reference) together:
from django.db import models
from django.utils import timezone

class Foo(models.Model):
    ...
    created_at = models.DateTimeField(default=timezone.now, editable=False)
The above solution was tested and verified with Django 2.1 and Python 3.6.
Drawback of this method
From the Django documentation for DateField.auto_now_add:
Automatically set the field to now when the object is first created. Useful for creation of timestamps. Note that the current date is always used; it’s not just a default value that you can override. So even if you set a value for this field when creating the object, it will be ignored. If you want to be able to modify this field, set the following instead of auto_now_add=True
In other words, unlike auto_now_add, the value from default=timezone.now is only a fallback: if you manually provide a valid datetime (for example in a fixture), the provided value will override the default.
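With the default in place, a fixture can either omit created_at (Django fills it in at load time) or pin an explicit timestamp, which the default no longer overrides. A sketch, reusing the model and field names assumed from the question above:

```json
[
  {
    "model": "customer.type",
    "pk": 1,
    "fields": {
      "name": "type",
      "created_by": 1
    }
  },
  {
    "model": "customer.type",
    "pk": 2,
    "fields": {
      "name": "another type",
      "created_by": 1,
      "created_at": "2019-01-01T00:00:00Z"
    }
  }
]
```

The first entry relies on the default; the second supplies a fixed timestamp, something that would have been silently ignored under auto_now_add=True.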
Related
I am using Djongo v1.3.6 to connect Django to MongoDB. Now I would like to have an optional field for a unique value - in my case a phone number. I thought it is possible to have null as a placeholder for accounts that do not have a phone number. However, MongoDB seems to be treating null as a unique value as well. Thus, it is not letting me insert new objects into the database once one object has phone_number: null
I tried declaring the index for phone_number as sparse, but it does not seem to have any effect. I have searched the Internet for some time but could not find anything useful for my case.
class UserProfile(models.Model):
    user = models.OneToOneField(AUTH_USER_MODEL, on_delete=models.CASCADE)
    phone_number = models.CharField(validators=[PHONE_VALIDATOR], max_length=17, unique=True, null=True, blank=True)
    ...
    meta = {
        'indexes': [
            {'fields': ['phone_number'], 'sparse': True, 'unique': True},
        ],
    }
Any help is very appreciated.
I solved this issue by altering the index that Djongo creates, using pymongo.
import pymongo
from pymongo import MongoClient

def rchop(s, suffix):
    # strip a trailing suffix, e.g. 'phone_number_1' -> 'phone_number'
    return s[:-len(suffix)] if s.endswith(suffix) else s

collection = MongoClient()[db_name][collection_name]

# drop the unique index Djongo created, then rebuild it as a partial index
collection.drop_index(index_name)
field_name = rchop(index_name, '_1')
collection.create_index(
    [(field_name, pymongo.ASCENDING)],
    unique=True,
    partialFilterExpression={
        field_name: {"$exists": True, "$gt": "0", "$type": "string"}
    }
)
Once I had altered the index, I was able to insert null values into MongoDB without sacrificing the unique check for non-null values.
I am trying to create a Django model that can handle multiple values in a single field. When the first field is queried through a view, the user should then select a value for the second field through a select box.
To give a background of the problem, my seeding fixture looks like this...
[
  {
    "model": "myapp.location",
    "pk": 1,
    "fields": {
      "county": "countyname",
      "places": {
        "name": "placename",
        "name": "placename",
        "name": "placename",
        "name": "placename",
        "name": "placename"
      }
    }
  }
]
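A standalone side note on the fixture above (a quick illustration, not from the original post): a JSON object cannot actually hold repeated keys, so a standard parser silently keeps only the last "name" entry, which is why this shape cannot represent multiple places:

```python
import json

# JSON objects cannot hold duplicate keys: json.loads silently
# keeps only the last value bound to "name".
places = json.loads('{"name": "a", "name": "b", "name": "c"}')
print(places)  # {'name': 'c'}
```

A JSON array, by contrast, preserves every element, which is the shape the accepted fix below ends up using.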
In the above scenario, location is the intended name of my model. Now, through a form, I want a user to be presented with 'countynames'. Upon selecting a countyname, the user should be presented with 'placenames' in the selected county for them to choose.
I have tried the following format for the model...
class Location(models.Model):
    county = models.CharField(max_length=100)
    places = models.CharField(max_length=100, choices=places.name)

    def __str__(self):
        return self.countyname
Now, I know that the error that is thrown ('places' is not defined) is warranted. I am asking whether there is a way to define places as it appears in the fixture, or whether anyone has a better implementation for such a model. Any alternative is welcome and appreciated, as I can't think of anything at this point.
So, after fiddling with two models and foreign keys as suggested in the comments above, I decided to amend the model, which also led to changing the fixture. I read about ArrayFields in Postgres + Django here. I amended the field 'places' to be an ArrayField as shown:
from django.contrib.postgres.fields import ArrayField

class Location(models.Model):
    county = models.CharField(max_length=100)
    places = ArrayField(models.CharField(max_length=100), blank=True)

    def __str__(self):
        return self.county
Next, it was just a matter of changing the JSON fixture to:
[
  {
    "model": "myapp.location",
    "pk": 1,
    "fields": {
      "county": "countyname",
      "places": ["placename", "placename", "placename", "placename"]
    }
  }
]
After running python manage.py loaddata fixturename.json, it worked and the DB was seeded!
I am attempting to create many model instances in one POST using a Mixin to support POST of arrays.
My use case will involve creating 1000s of model instances in each call. This very quickly becomes slow with DRF due to each model being created one at a time.
In an attempt to optimise the creation, I have changed to use bulk_create(). While this does result in a significant improvement, I noticed that for each model instance being created, a SELECT statement was being run to get the ForeignKey, which I traced to the call to serializer.is_valid().
As such, adding n instances would result in n SELECT queries to get the ForeignKey and 1 INSERT query.
As an example:
Models (using automatic ID fields):
class Customer(models.Model):
    name = models.CharField(max_length=100, blank=False)
    joined = models.DateTimeField(auto_now_add=True)

class Order(models.Model):
    customer = models.ForeignKey(Customer, on_delete=models.CASCADE)
    timestamp = models.DateTimeField()
    price = models.FloatField()
POST data to api/orders/:
[
  {
    "customer": 13,
    ...
  },
  {
    "customer": 14,
    ...
  },
  {
    "customer": 14,
    ...
  }
]
This would result in 3 SELECT statements to get the Customer for each of the Orders, followed by 1 INSERT statement to push the data in.
Similar to prefetch_related() when fetching data in GET requests, is there any way to avoid performing so many queries during deserialization and validation, such as telling the serializer to prefetch foreign keys?
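The duplicate lookups described above can usually be collapsed by deduplicating the foreign-key values in the payload before validation. A minimal sketch (the payload literal and field names are illustrative; the commented-out line shows the single standard ORM query that would replace the per-row SELECTs):

```python
# Example payload shaped like the POST body above.
data = [
    {"customer": 13, "price": 9.99},
    {"customer": 14, "price": 19.99},
    {"customer": 14, "price": 4.99},
]

# Collect the distinct customer PKs so they can be fetched with
# one query instead of one per order.
customer_ids = sorted({item["customer"] for item in data})
print(customer_ids)  # [13, 14]

# With Django available, a single SELECT would then cover the whole batch:
# customers = {c.pk: c for c in Customer.objects.filter(pk__in=customer_ids)}
```

The fetched mapping can then be handed to the serializer context so validation reuses it rather than issuing a SELECT per instance.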
I'm trying to add records via the code below:
Post.objects.update_or_create(
    user=user,
    defaults={
        "title": external_post.get('title'),
        "body": external_post.get('body'),
        "seo": external_post.get('seo')
    }
)
I've successfully migrated the model, but I'm getting the error: null value in column "created_at" violates not-null constraint.
created_at = models.DateTimeField(auto_now_add=True)
updated_at = models.DateTimeField(auto_now=True)
I faced this same problem when I was using the @transaction.atomic decorator in Django. The reason I hit the same error was that, in one of my models, I was not using the default auto-increment ID but had instead specified a particular field as the primary key using primary_key=True.
As a result, my data contained two rows with the same primary key, which turned the operation into an 'update' rather than a 'create' in the database.
So, Django was trying to update an entry but the created_at field was missing hence the error.
I would suggest you do this instead:
post, created = Post.objects.update_or_create(
    user=user,
    defaults={
        "title": external_post.get('title'),
        "body": external_post.get('body'),
        "seo": external_post.get('seo')
    }
)
if created:
    # your code goes here
    # (this ensures the code runs only when a new entry is created)
You can read this for a better explanation: https://code.djangoproject.com/ticket/17654
I have a simple Django model similar to this:
class TestModel(models.Model):
    test_field = LowerCaseCharField(max_length=20, null=False,
                                    verbose_name='Test Field')
    other_test_field = LowerCaseCharField(max_length=20, null=False, unique=True,
                                          verbose_name='Other Test Field')
Notice that other_test_field is a unique field. Now I also have some data stored that looks like this:
[
  {
    "test_field": "object1",
    "other_test_field": "test1"
  },
  {
    "test_field": "object2",
    "other_test_field": "test2"
  }
]
All I'm trying to do now is switch the other_test_field fields in these two objects, so that the first object has "test2" and the second object has "test1" for other_test_field. How do I accomplish that while preserving the uniqueness? Ultimately I'm trying to update data in bulk, not just swapping two fields.
Anything that updates the data serially is going to hit an IntegrityError due to a unique constraint violation, and I don't know a good way to temporarily remove the unique constraint for this one operation before adding it back. Any suggestions?
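For reference, the classic workaround is a three-step swap through a placeholder value inside a transaction, so that no intermediate state violates the unique constraint. A minimal sketch using sqlite3 as a stand-in for the real database (the table, column names, and placeholder value are illustrative, not from the original post):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE testmodel (test_field TEXT, other_test_field TEXT UNIQUE)")
conn.executemany(
    "INSERT INTO testmodel VALUES (?, ?)",
    [("object1", "test1"), ("object2", "test2")],
)

with conn:  # run the swap atomically
    # Step 1: park the first value on a placeholder that no row uses.
    conn.execute(
        "UPDATE testmodel SET other_test_field = '__tmp__' WHERE other_test_field = 'test1'"
    )
    # Step 2: give the second row the first value (now free).
    conn.execute(
        "UPDATE testmodel SET other_test_field = 'test1' WHERE other_test_field = 'test2'"
    )
    # Step 3: replace the placeholder with the second value (now free).
    conn.execute(
        "UPDATE testmodel SET other_test_field = 'test2' WHERE other_test_field = '__tmp__'"
    )

rows = dict(conn.execute("SELECT test_field, other_test_field FROM testmodel"))
print(rows)  # {'object1': 'test2', 'object2': 'test1'}
```

On PostgreSQL specifically, another option is to declare the unique constraint DEFERRABLE and defer its check to commit time, which allows the swap in two plain UPDATEs within one transaction.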