I just realized that on a legacy table I'm using in a Django app, a varchar(5) column (for example) is rendered in Python as a models.CharField(max_length=15) field. This 3x inflation of the max length is consistent across many other fields.
Why? Or, more importantly, if I changed the Django definition to models.CharField(max_length=5), would I break anything?
It is probably a manual error by whoever wrote the models.
No, it doesn't break anything if you change it to 5. Not only that, you should change it to 5, so that form validation itself enforces that length wherever the field is used.
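For instance, a minimal sketch of what the corrected definition might look like against a hypothetical legacy table; the model, field, and table names here are illustrative, not from the original question:

from django.db import models

class LegacyItem(models.Model):
    # Matching max_length to the real varchar(5) column width means
    # ModelForm validation rejects anything longer than 5 characters
    # before it ever reaches the database.
    code = models.CharField(max_length=5)

    class Meta:
        managed = False           # the table already exists
        db_table = "legacy_item"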
Consider a model Section that is displayed on a site and created / edited by a user using the Django admin interface. I would like to add a field that allows the user to easily control the order in which sections are displayed on the site. The easiest option seems to be to allow for an integer field that is auto-incremented but can be edited by the user -- akin to what the built-in AutoField does.
However, to make editing the order easier, I would like to increment the field's default value by 10 each time, so the user can shift sections around more easily. The first section would get order=1, the next one order=11, and so on; that way a section can be wedged in between those first two by giving it, e.g., order=6.
Is there a way I can reuse AutoField to achieve this purpose? And if no, how could I best achieve this type of ordering scheme?
Ideally, what I'd like to achieve should look like this:
from django.db import models

class Section(models.Model):
    text = models.TextField()
    order = AutoField(step=10)

    class Meta:
        ordering = ('order',)
AutoField won't work: it isn't editable, and it has to be the primary key.
I also suggest solving this visually with drag and drop in the UI, and then reordering all sections as a whole rather than allowing wedging. If two people simultaneously wedge 25 between 20 and 30, you still have the same problem. Reordering on save is a much cleaner solution, especially when using select_for_update:
Returns a queryset that will lock rows until the end of the transaction, generating a SELECT ... FOR UPDATE SQL statement on supported databases.
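As a rough illustration of that approach, here is a minimal sketch of a reorder-on-save helper using select_for_update; the Section model and the reorder_sections function are illustrative assumptions, not code from the question:

from django.db import models, transaction

class Section(models.Model):
    text = models.TextField()
    order = models.IntegerField(default=0)

    class Meta:
        ordering = ("order",)

def reorder_sections(ordered_ids):
    """Persist a new ordering, given the section ids in their new order."""
    with transaction.atomic():
        # Lock the affected rows so two concurrent reorders cannot interleave.
        sections = Section.objects.select_for_update().filter(id__in=ordered_ids)
        by_id = {s.id: s for s in sections}
        for position, section_id in enumerate(ordered_ids, start=1):
            section = by_id[section_id]
            section.order = position
            section.save(update_fields=["order"])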
Some of my models are only unique in a combination of keys. I don't want to use an auto-numbering id as the identifier as subsets of the data will be exported to other systems (such as spreadsheets), modified and then used to update the master database.
Here's an example:
class Statement(models.Model):
    supplier = models.ForeignKey(Supplier)
    total = models.DecimalField("statement total", max_digits=10, decimal_places=2)
    statement_date = models.DateField("statement date")
    ....

class Invoice(models.Model):
    supplier = models.ForeignKey(Supplier)
    amount = models.DecimalField("invoice total", max_digits=10, decimal_places=2)
    invoice_date = models.DateField("date of invoice")
    statement = models.ForeignKey(Statement, blank=True, null=True)
    ....
Invoice records are only unique for a combination of supplier, amount, and invoice_date.
I'm wondering if I should create a slug for Invoice based on supplier, amount and invoice_date so that it is easy to identify the correct record.
An example of the problem of having multiple related fields to identify the right record is django-csvimport which assumes there is only one related field and will not discriminate on two when building the foreign key links.
Yet the slug seems a clumsy option and needs some kind of management to rebuild the slugs after adding records in bulk.
I'm thinking this must be a common problem and maybe there's a best practice design pattern out there somewhere.
I am using PostgreSQL in case anyone has a database solution. Although I'd prefer to avoid that if possible, I can see that it might be the way to build my slug if that's the way to go, perhaps with trigger functions. That just feels a bit like hidden functionality though, and may cause a headache for setting up on a different server.
UPDATE - after reading initial replies
My application requires that data may be exported, modified remotely, and merged back into the master database after review and approval. Hidden autonumber keys don't survive that round trip reliably. The relationship "invoices[2417] belongs to statements[265]" is not persistent if the statement table is emptied and reloaded from a CSV.
If I use the numeric autonumber pk, then any process that updates the database would need to refresh the related key numbers, e.g. by using multiple WITH clauses.
If I create a slug that is based on my 3 keys but easy to reproduce, then I can use it as the key, albeit clumsily. I'm thinking of a slug along the lines of:
u'%s %s %s' % (self.supplier,
               self.statement_date.strftime("%Y-%m-%d"),
               self.total)
This seems quite clumsy and not very DRY, as I expect I may have to recreate the slug elsewhere, duplicating the algorithm (maybe in an Excel formula, or an Access query).
I thought there must be a better way I was missing, but from yuvi's reply it looks like there should be, and there will be, but not yet :-(
What you're talking about is a multi-column primary key, otherwise known as a "composite" or "compound" key. Support in Django for composite keys is still in the works; you can read about it here:
Currently Django models only support a single column in this set,
denying many designs where the natural primary key of a table is
multiple columns [...] Current state is that the issue is
accepted/assigned and being worked on [...]
The link also mentions a partial implementation, django-compositekeys. It's only partial and will cause you trouble when navigating relationships:
support for composite keys is missing in ForeignKey and
RelatedManager. As a consequence, it isn't possible to navigate
relationships from models that have a composite primary key.
So currently it isn't entirely supported, but it will be in the future. Regarding your own project, you can make of that what you will, though my own suggestion is to stick with the fully supported default of a hidden auto-incremented field that you don't even need to think about (and use unique_together to enforce the uniqueness of the described fields instead of making them your primary keys).
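A minimal sketch of that suggestion, keeping the implicit auto pk and adding unique_together; the on_delete arguments are an assumption for newer Django versions, and the string references just avoid needing the other models in scope:

from django.db import models

class Invoice(models.Model):
    supplier = models.ForeignKey("Supplier", on_delete=models.CASCADE)
    amount = models.DecimalField("invoice total", max_digits=10, decimal_places=2)
    invoice_date = models.DateField("date of invoice")
    statement = models.ForeignKey("Statement", blank=True, null=True,
                                  on_delete=models.SET_NULL)

    class Meta:
        # The hidden auto id stays the primary key; this enforces the
        # natural uniqueness of supplier + amount + invoice_date.
        unique_together = (("supplier", "amount", "invoice_date"),)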
I hope this helps!
No.
A model needs to have one field with primary_key=True. By default this is the (hidden) AutoField that stores the object id, but you can set primary_key=True on any other field.
I've done this in cases where I'm building a Django project on top of tables that were previously created manually or through other frameworks/systems.
In reality you can use whatever means you like for joining objects together in queries. As long as a query returns data that can be associated with the models you have, it does not really matter which field you use for the joins. Just keep in mind that the solution should be as efficient as possible.
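For example, a minimal sketch of pointing primary_key at a different field; the slug-style natural key here is purely illustrative:

from django.db import models

class Invoice(models.Model):
    # Replaces the hidden auto id as the primary key.
    slug = models.SlugField(primary_key=True)
    amount = models.DecimalField(max_digits=10, decimal_places=2)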
Alan
In my models I need to store a mobile number in the following format 447182716281. What field should I use? Does Django have anything to support this?
For example:
mobile = models.IntegerField(max_length=12)
Phone numbers should be CharFields. An IntegerField will not preserve a leading 0, a +, or spacing.
I think this is an interesting question, since it really depends on how you model the problem. A CharField works fine, but have a look at this:
ORM tricks
There is a RegexField for form validation. In the model, just use a CharField.
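A minimal sketch of that split, assuming the 12-digit format shown in the question; the model, form, and field names are illustrative:

from django import forms
from django.db import models

class Contact(models.Model):
    # Stored as text so leading zeros, "+", and spacing are preserved.
    mobile = models.CharField(max_length=12)

class ContactForm(forms.Form):
    mobile = forms.RegexField(
        regex=r"^\d{12}$",
        error_messages={"invalid": "Enter a 12-digit number such as 447182716281."},
    )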
On using models.IntegerField(default=0) with a larger number, it gives the error:
Ensure this value is less than or equal to 2147483647.
So a better field to use would be BigIntegerField:
A 64-bit integer, much like an IntegerField except that it is guaranteed to fit numbers from -9223372036854775808 to 9223372036854775807. The default form widget for this field is a TextInput.
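A minimal sketch of that alternative; the model and field names are illustrative, and note that a BigIntegerField still will not keep a leading zero or a "+":

from django.db import models

class Contact(models.Model):
    # 447182716281 exceeds IntegerField's 2147483647 ceiling,
    # but fits comfortably in a 64-bit BigIntegerField.
    mobile = models.BigIntegerField()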
With PostgreSQL, an IntegerField won't hold a number this large, so it's better to use a CharField in Django:
Number = models.CharField(max_length=12)
Django's documentation is quite clear about storing empty strings as "" rather than NULL at a database level (so there is only one possible format for empty data):
Note that empty string values will always get stored as empty strings, not as NULL. Only use null=True for non-string fields such as integers, booleans and dates. For both types of fields, you will also need to set blank=True if you wish to permit empty values in forms, as the null parameter only affects database storage (see blank).
Nonetheless, after adding a new field, I've started encountering IntegrityErrors on the new field (phone_number).
null value in column "phone_number" violates non-null constraint
That model looks like this with the new field (I performed a migration via south):
class Person(models.Model):
    user = models.ForeignKey(User)
    description = models.TextField(blank=True)
    phone_number = models.CharField(blank=True)
I've since (temporarily) resolved the issue by setting null=True on phone_number, but now I have hundreds of entries with empty strings, and a single NULL value in my database. (I also tried adding default='' to the phone_number field, but I was still seeing IntegrityError issues.)
In the past I've always used MySQL, but on this project I'm using Postgres. The generated SQL insert attempt is:
'INSERT INTO "people_person" ("user_id", "description", "gender", "birthdate", "default_image_id", "zip_code", "beta_status") VALUES (%s, %s, %s, %s, %s, %s, %s) RETURNING "people_person"."id"'.
My expectation would be that Django inserts a blank string into the "phone_number" column, but it doesn't appear to be doing so. The other thing I might expect would be for Django to include a SET DEFAULT in the CREATE TABLE statement, but it doesn't. So Postgres gets angry about the NOT NULL on that column.
Thanks!
As is usually the case with problems that are so seemingly intractable, the issue at hand was user error.
My application had two entry points - two WSGI files, but only one code base. Normally, Apache will only reload your code if the file is touched. My deploy script was only touching one of those WSGI files - which meant that people reaching my site via the other WSGI file were still seeing old code. Worse, the database was modified under that old code, but the models were still as they were before.
This in turn caused the IntegrityError issues. Django didn't know about the phone_number field, so even though I had set blank=True, Django made no effort to insert a blank value - and the database of course thought that meant NULL.
This caused a series of different, hard-to-track-down errors, including the one above.
It's amazing how often really tough issues like these are caused by dumb minor omissions - like a deploy script I wrote 2 months ago and forgot to update.
Thanks for reading folks, I've upvoted the other answers, but I need to accept mine since it was ultimately the solution.
I discovered that if you explicitly set the field value to None, you will still get these errors. In other words, the default= value is applied as soon as you create the Python object, rather than when you save it to the database.
I guess that is reasonable but it was a bit unexpected.
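A minimal sketch of that behaviour; the Person model here is hypothetical, and only the phone_number field matters:

from django.db import models

class Person(models.Model):
    phone_number = models.CharField(max_length=20, blank=True, default="")

p = Person()
print(p.phone_number)   # "" -- the default is applied when the object is constructed

p = Person(phone_number=None)
print(p.phone_number)   # None -- an explicit None overrides the default,
                        # and saving would hit the NOT NULL constraint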
I have a field in a model that I want users to feel like they can write an arbitrary amount of text in. Django provides a CharField and a TextField in the models. I assume that the difference is that one of them is a char(max_length) and the other is a varchar internally.
I am tempted to use the TextField, but since it doesn't respect max_length, I am somewhat wary of someone dumping loads of data into it and DOSing my server. How should I deal with this?
Thanks!
Fields in the model only represent how the data is stored in the database.
You can very easily enforce a maximum length in the form that validates the user's input, like this:
from django import forms

class InputForm(forms.Form):
    text = forms.CharField(max_length=16384, widget=forms.Textarea)
    ...
This will make sure the maximum length a user can successfully enter is 16k characters.
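As a usage sketch, the same idea with a ModelForm, so the model keeps an unbounded TextField while the form enforces the cap; the Post model is illustrative:

from django import forms
from django.db import models

class Post(models.Model):
    body = models.TextField()           # no length limit at the database level

class PostForm(forms.ModelForm):
    # Override the auto-generated field to cap input at 16k characters.
    body = forms.CharField(max_length=16384, widget=forms.Textarea)

    class Meta:
        model = Post
        fields = ["body"]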