I use Django 1.7 to create a PostgreSQL 9.3 DB with several tables that contain foreign-key constraints. The DB is used for a warehouse in which objects have a physical location. If an object (in the Stored_objects table) is deleted, I'd also like to delete its position, so my model for the location looks like:
class object_positions(models.Model):
    obj_id = models.ForeignKey(Stored_objects, db_column='obj_id', on_delete=models.CASCADE)
    (...)
The constraint in the DB (after syncdb), however, looks like this:
ALTER TABLE object_positions
    ADD CONSTRAINT stored_obj_fkey FOREIGN KEY (obj_id)
    REFERENCES "Stored_objects" (id) MATCH SIMPLE
    ON UPDATE NO ACTION ON DELETE NO ACTION;
Is there something else I have to do to get this constraint right in the db?
Django uses its own code to handle cascades; they're not implemented at the database level. The main reasons are to maintain consistent behaviour across backends and to allow model signals to fire for cascaded deletions. If for some reason you want the constraint at the database level, you'll have to alter the table yourself. I wouldn't recommend that unless you have a compelling reason (such as another app accessing the database and bypassing Django).
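If you do decide you need the database-level constraint, a RunSQL migration (available since Django 1.7) at least keeps the change versioned with the rest of your schema. A minimal sketch, assuming the table and constraint names from your question; the app label and migration dependency are hypothetical placeholders:

from django.db import migrations

class Migration(migrations.Migration):

    dependencies = [
        ('warehouse', '0001_initial'),  # hypothetical app label / migration
    ]

    operations = [
        migrations.RunSQL(
            # Recreate the FK with ON DELETE CASCADE at the database level.
            sql="""
                ALTER TABLE object_positions
                    DROP CONSTRAINT stored_obj_fkey,
                    ADD CONSTRAINT stored_obj_fkey FOREIGN KEY (obj_id)
                        REFERENCES "Stored_objects" (id) ON DELETE CASCADE;
            """,
            # Restore the default NO ACTION behaviour on reversal.
            reverse_sql="""
                ALTER TABLE object_positions
                    DROP CONSTRAINT stored_obj_fkey,
                    ADD CONSTRAINT stored_obj_fkey FOREIGN KEY (obj_id)
                        REFERENCES "Stored_objects" (id);
            """,
        ),
    ]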
Related
I am trying to enforce a constraint between two foreign keys on a join table, and I don't know whether I can do it using the database, or whether I should do it through my app, or my ORM.
Here are my tables:
Dataset

Tag
- Dataset: FK
- name: string (e.g. "park", "church", etc.)

Place
- Dataset: FK
- latitude
- longitude

PlaceTag (my join table)
- Tag: FK
- Place: FK
- note: string (e.g. "this place is my favorite park")
I want to enforce the constraint that each PlaceTag has a Tag and a Place that belong to the same Dataset. Should I do this using the database, or my app? Or should I re-structure my models to enforce this constraint more easily?
FWIW, this is an open-source project, and my PR for creating these tables is up here: https://github.com/mapseed/api/pull/161/files
The project is using Django, if that helps.
One way of "enforcing" (note the quotation marks) this in Django would be to override the PlaceTag's save() method. In there you can raise an exception whenever self.place.dataset != self.tag.dataset. Yet you should note that there are situations in which Django will not call the custom save() method of a model:
- When calling the update() method on a queryset. This method is meant for bulk updates and hence, for performance reasons, performs the update directly at the database level (reference).
- Inside (data) migrations, where custom save() methods are not available.
In these two situations the approach I propose will not enforce the constraint (hence the quotation marks at the beginning). This is of course not the same as, and not as strong as, enforcing it at the database level. That said, I don't think there is a portable way (i.e. available in most SQL database engines) of enforcing such a condition, since checking it requires a join against other tables and plain CHECK constraints cannot reference other tables, yet I may be wrong on this one.
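A minimal sketch of that save() override, assuming the model and field names from your schema (the max_length is a guess); comparing the dataset_id attributes avoids fetching the Dataset rows themselves:

from django.core.exceptions import ValidationError
from django.db import models

class PlaceTag(models.Model):
    tag = models.ForeignKey('Tag', on_delete=models.CASCADE)
    place = models.ForeignKey('Place', on_delete=models.CASCADE)
    note = models.CharField(max_length=255, blank=True)

    def save(self, *args, **kwargs):
        # Refuse to link a place and a tag from different datasets.
        if self.place.dataset_id != self.tag.dataset_id:
            raise ValidationError(
                "place and tag must belong to the same dataset")
        super(PlaceTag, self).save(*args, **kwargs)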
I have a concern with Django "subqueries" when using the Django ORM. When fetching a queryset or performing a DB operation, I can bypass any assumptions Django makes about which database to use by forcing usage of the specific database I want:
b_det = Book.objects.using('some_db').filter(book_name = 'Mark')
The above disregards any database routers I might have set and goes straight to 'some_db'.
But suppose my models look approximately like this:
class Author(models.Model):
    author_name = models.CharField(max_length=255)
    author_address = models.CharField(max_length=255)

class Book(models.Model):
    book_name = models.CharField(max_length=255)
    author = models.ForeignKey(Author, null=True)
And I fetch a QuerySet representing all books that are called Mark:
b_det = Book.objects.using('some_db').filter(book_name = 'Mark')
Then later, if somewhere in the code I trigger a subquery by doing something like:
if b_det:
    auth_address = b_det[0].author.author_address
then this does not use the original database 'some_db' that I specified for the main query. It goes through the routers again and picks up (possibly) the wrong database.
Why does Django do this? IMHO, if I forced usage of a database for the original query, then the same database should be used for the subquery as well. Why must the database routers come into the picture at all?
This is not a subquery in the strict SQL sense of the word. What you are actually doing here is executing one query and using its result to find related items.
You can chain filters and do lots of other operations on a queryset, but it will not be executed until you evaluate it, for example by iterating it or taking an item or slice from it, and here you are actually taking an item:
auth_address = b_det[0].#rest of code
So you have a materialized query, and you are now trying to find the address of the related author. That requires another query, but you are not issuing it through using(), so Django is free to choose which database to use. You can overcome this by using select_related.
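A minimal sketch, assuming the models above. select_related('author') folds the author lookup into the original query as a JOIN, so the related row is read from 'some_db' together with the book:

b_det = (Book.objects.using('some_db')
                     .select_related('author')
                     .filter(book_name='Mark'))

if b_det:
    # No follow-up query here: the author row was fetched by the join above.
    auth_address = b_det[0].author.author_address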
I am using Django and have three models: Customer, Location, and Department. Each has a related Settings object.
Is it better form to create a single table with optional/null foreign keys?
Or to create a different setting object/table for each of the 3 entities?
There are a few options:
1. Create a separate Settings table and have a nullable ForeignKey from all of your objects to the Settings table. If you choose this option, create an abstract base class that has the ForeignKey to the Settings table and inherit from it; that way you don't have to add the ForeignKey every time you create a new model (see the sketch after this list).
2. Create a separate Settings table and use GenericForeignKeys from the Settings table to reference your objects (Customer, Location, and Department). This has the advantage of not adding an extra column to every table that needs settings. However, you can't do DB joins across GenericForeignKeys via the Django ORM's normal API; you'd have to use raw SQL. Also, select_related doesn't work on GenericForeignKeys, so you'd have to use prefetch_related instead.
3. Store the settings in a column in the database. Interact with the data in some structured format (I like JSON), serialize it to a string to store in the DB, and deserialize it back to read the settings. With this method you wouldn't need to join with another table to get settings, wouldn't need to run migrations every time you add new settings, and wouldn't need a separate Settings table at all. However, constructing a query to find objects with certain settings would be a pain, and such a query would probably be slow as well.
Each option has its pros and cons, so pick your poison ;)
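For example, a minimal sketch of option 1; the settings fields here are hypothetical, and the abstract base class contributes the ForeignKey to every concrete model that inherits from it:

from django.db import models

class Settings(models.Model):
    # Hypothetical settings fields; use whatever your app actually needs.
    notify_by_email = models.BooleanField(default=False)
    timezone = models.CharField(max_length=64, default='UTC')

class HasSettings(models.Model):
    settings = models.ForeignKey(Settings, null=True, blank=True,
                                 on_delete=models.SET_NULL)

    class Meta:
        abstract = True  # no table of its own; just adds the FK to subclasses

class Customer(HasSettings):
    name = models.CharField(max_length=255)

class Location(HasSettings):
    address = models.CharField(max_length=255)

class Department(HasSettings):
    title = models.CharField(max_length=255)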
I wanted to create an email-authenticated Django user model, and I basically followed the steps on this website:
http://www.micahcarrick.com/django-email-authentication.html
I also included the table-alteration code in a post_syncdb function in a management module, to make the email a unique identifier. This works fine with MySQL. BUT it won't work for SQLite, because SQLite's ALTER TABLE support is limited and won't let you change that attribute on an existing column, or even add a column with a UNIQUE constraint.
If there is no elegant way of doing this, then I might have to switch to MySQL.
http://www.sqlite.org/faq.html#q26
So UNIQUE is fully supported, but you cannot add it to an existing table with ALTER TABLE. You would have to copy the table's contents into a new table that has the UNIQUE constraint and then rename the tables, or just dump it, modify the dump, and reimport it.
I think, in your post_syncdb hook, you can add:
cursor.execute(
    "CREATE UNIQUE INDEX IF NOT EXISTS auth_user_email_unique "
    "ON auth_user (email COLLATE NOCASE);"
)
You may have to break this out into different branches based on settings.DATABASES['default']['ENGINE'], as sketched below.
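A minimal sketch of such a hook with the engine check; the handler name and placement are hypothetical, and note that post_syncdb was replaced by post_migrate in later Django versions:

from django.conf import settings
from django.db import connection
from django.db.models.signals import post_syncdb
from django.contrib.auth import models as auth_models

def make_email_unique(sender, **kwargs):
    engine = settings.DATABASES['default']['ENGINE']
    cursor = connection.cursor()
    if 'sqlite3' in engine:
        # SQLite can't ALTER a column to UNIQUE, but a unique index
        # enforces exactly the same constraint.
        cursor.execute(
            "CREATE UNIQUE INDEX IF NOT EXISTS auth_user_email_unique "
            "ON auth_user (email COLLATE NOCASE);"
        )
    else:
        # MySQL (and most other backends) can add the constraint directly;
        # this raises if the constraint already exists, so guard as needed.
        cursor.execute(
            "ALTER TABLE auth_user "
            "ADD CONSTRAINT auth_user_email_unique UNIQUE (email);"
        )

post_syncdb.connect(make_email_unique, sender=auth_models)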
I've been looking for a way to define database tables and alter them via a Django API.
For example, I'd like to be able to write code that directly manipulates table DDL, allowing me to define tables or add columns to a table on demand, programmatically (without running a syncdb). I realize that django-south and django-evolution may come to mind, but I don't really think of those as tools meant to be integrated into an application and used by an end user; rather, they are utilities for upgrading your database tables. I'm looking for something where I can do something like:
class MyModel(models.Model):  # wouldn't run syncdb; instead do something like below
    a = models.CharField()
    b = models.CharField()

model = MyModel()
model.create()                            # runs the CREATE TABLE (instead of a syncdb)
model.add_column(c=models.CharField())    # sets a column to be added
model.alter()                             # applies the ALTER statement
model.del_column('a')                     # sets column 'a' for removal
model.alter()                             # applies the removal
This is just a toy example of how such an API would work, but the point is that I'd be very interested in finding out whether there is a way to programmatically create and change tables like this. It might be useful for things such as content management systems, where one might want to dynamically create a new table. Another example would be a site that stores datasets of arbitrary width, for which tables need to be generated dynamically by the interface or by data imports. Does anyone know any good ways to dynamically create and alter tables like this?
(Granted, I know one can run direct SQL statements against the database, but that solution lacks the ability to treat the database as objects.)
Just curious whether people have any suggestions or approaches to this...
You can try to interface with Django's code that manages changes in the database. It is a bit limited (no ALTER, for example, as far as I can see), but you may be able to extend it. Here's a snippet from django.core.management.commands.syncdb:
for app in models.get_apps():
    app_name = app.__name__.split('.')[-2]
    model_list = models.get_models(app)
    for model in model_list:
        # Create the model's database table, if it doesn't already exist.
        if verbosity >= 2:
            print "Processing %s.%s model" % (app_name, model._meta.object_name)
        if connection.introspection.table_name_converter(model._meta.db_table) in tables:
            continue
        sql, references = connection.creation.sql_create_model(model, self.style, seen_models)
        seen_models.add(model)
        created_models.add(model)
        for refto, refs in references.items():
            pending_references.setdefault(refto, []).extend(refs)
            if refto in seen_models:
                sql.extend(connection.creation.sql_for_pending_references(refto, self.style, pending_references))
        sql.extend(connection.creation.sql_for_pending_references(model, self.style, pending_references))
        if verbosity >= 1 and sql:
            print "Creating table %s" % model._meta.db_table
        for statement in sql:
            cursor.execute(statement)
        tables.append(connection.introspection.table_name_converter(model._meta.db_table))
Take a look at connection.creation.sql_create_model. The creation object is instantiated by the database backend relevant to the database you are using in your settings.py; all of the backends live under django.db.backends.
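For instance, here's a minimal sketch of calling that same API yourself, outside syncdb, on the pre-schema-editor Django versions the snippet above comes from. The helper name is made up, error handling is omitted, and known_models is left empty, so foreign keys to not-yet-created tables would land in pending and need handling:

from django.core.management.color import no_style
from django.db import connection

def create_table(model):
    """Execute the CREATE TABLE statements for a single model class."""
    sql, pending = connection.creation.sql_create_model(
        model, no_style(), set())
    cursor = connection.cursor()
    for statement in sql:
        cursor.execute(statement)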
If you must have ALTER TABLE, I think you can create your own custom backend that extends an existing one and adds this functionality. Then you can interface with it directly through an ExtendedModelManager you create.
Quickly, off the top of my head: create a custom Manager with the create/alter methods.