I have a Django project with a DigestIssue model, among others. Django auto-created the primary key field id, and I added my own field number. I wanted to get rid of the duplication and make number the PK, since number is unique and holds the same values as id. But I have foreign keys referencing this model.
I doubted that they would migrate automatically after such an operation. I tried anyway, hoping for such automigration, and got a constraint "idx_16528_sqlite_autoindex_gatherer_digestissue_1" of relation "gatherer_digestissue" does not exist error (the "sqlite" in the constraint name is historical; I switched to PostgreSQL a while ago). I also tried a more involved approach, following https://blog.hexack.fr/en/change-the-primary-key-of-a-django-model.html, but got the same error at the PK-switching step.
So the question is: how do I replace an old primary key in Django with a new one that has the same values and is referenced by other models?
I am gradually updating a legacy Django application from 1.19 -> 2.2 and beyond. To upgrade to 2.2, I just added on_delete=models.CASCADE to all models.ForeignKey fields that were missing it (which I also had to do retroactively for existing migrations, apparently...).
Possibly related/unrelated to that, when I run manage.py migrate, Django throws the following error (I shortened the table/field names for brevity):
django.db.utils.IntegrityError: The row in table 'X' with primary key '3' has an invalid foreign key: X.fieldname_id contains a value '4' that does not have a corresponding value in Y__old.id.
Note in particular the __old.id suffix on the db table that Django expects to contain a row with id 4. When I inspect the db manually, table Y really does contain a valid row with id 4! I'm assuming that, to support the migration, Django makes some temporary tables suffixed with __old and is somehow unable to migrate that data?
The db row Y in question is really simple: a char, boolean, and number column.
Edit: this seems to be related to an old Django bug with SQLite. Not sure how to solve it. It does not occur with Django 2.1.15 and starts occurring in Django 2.2.
This problem is caused by the mentioned Django bug, but by the time you see the migration error, your database is already broken.
When you dump the DB to SQL, you can see REFERENCES clauses pointing at tables ending in __old, although those tables do not actually exist:
$> sqlite3 mydb.db .dump | grep '__old'
CREATE TABLE IF NOT EXISTS "company" [...]"account_id" integer NULL REFERENCES "account__old" ("id") [...]
Fortunately, the DB can be fixed easily, by just removing the __old suffix and dumping into a new database. This can be automated with sed:
sqlite3 broken.db .dump | sed 's/REFERENCES "\(.[^"]*\)__old"/REFERENCES "\1"/g' | sqlite3 fixed.db
It is not an ideal solution, but you can manually delete the row from the database, or set the foreign key to a temporary value, migrate, and then restore the original value.
I have legacy code which had no foreign keys defined in the schema.
The raw data for the row includes the key value of the parent, naturally.
My first porting attempt to PostgreSQL just updated the field with the raw value; I did not add foreign keys to Django's models.
Now I am trying to add foreign keys to make the schema more informative.
When I add a foreign key, Django's update requires me to provide an instance of the parent object; I can no longer update by simply providing the key value. This is onerous because my code would now need knowledge of all the relations in order to fetch related objects, with model-specific update calls. That seems crazy to me, at least starting from where I am, so I feel like I am really missing something.
Currently, the update code just pushes rows in blissful ignorance. The update code is generic for tables, which is easy when there are no relations.
Django's model data means that I can find the related object dynamically for any given model, and doing this means I can still keep very abstracted table update logic. So this is what I am thinking of doing. Or just doing raw SQL updates.
Does a solution to this already exist, even if I can't find it? I am expecting to be embarrassed.
The ValueError comes from Django ORM code that knows exactly which model it expects and what the related field is; the missing step is to find the instance of the related object.
django/db/models/fields/related_descriptors.py:
In this code, which throws the exception, value is supposed to be an instance of the parent model; instead, value is the key value. This basically tells me how I can inspect the model to handle this in advance, but I wonder if I am re-inventing the wheel.
if value is not None and not isinstance(value, self.field.remote_field.model._meta.concrete_model):
    raise ValueError(
        'Cannot assign "%r": "%s.%s" must be a "%s" instance.' % (
            value,
            instance._meta.object_name,
            self.field.name,
            self.field.remote_field.model._meta.object_name,
        )
    )
You could use the _id suffix to set the id value directly.
For a given model:
class Album(models.Model):
    artist = models.ForeignKey(Musician, on_delete=models.CASCADE)
You can set artist by id in the following manner:
Album.objects.create(artist_id=2)
I am trying to implement a unique_together constraint on one of the models in my Django project. The decision to add a unique constraint was taken after test data had already been created in the table.
Now while running migrations, I came across the following error:
django.db.utils.IntegrityError: UNIQUE constraint failed: movt_frt.frt_group_id, movt_frt.rec_loc_id, movt_frt.dis_loc_id
I have tried creating similar unique constraints on tables that previously held no data, and those migrations ran successfully.
My question is:
Am I right in concluding that the migration is failing because the table already has data in it?
Is there a way to make some changes to the migration file, along the lines discussed here, and attempt the migration again successfully?
I am using Django ver. 2.0.6
I don't know if this question is still relevant. It's not so much that the table contains data; the data already contains combinations that conflict with your specified unique_together.
Overriding the constraint is not possible, but if the table is not too big you can correct the data manually.
I am having a problem with web2py foreign keys, and the lack of documentation on this is pretty frustrating. I define tables in different files, and most tables are related. I used to use db.table_name to denote foreign keys, but I was told to use 'reference tbl_name'. This, however, makes no difference whatsoever: I still get errors complaining about models defined in later files, as per web2py's alphabetical-order rules. It seems the tables actually have to be defined in order, rendering the reference keyword useless at best. Or am I missing something here?
I would assume you are using MySQL. In that case it is a MySQL error: MySQL raises error no. 150 (a foreign key error). So it is a database backend issue rather than a web2py problem; please check the documentation for your database server. I solved the issue by declaring (db.define_table) all the primary tables that require no foreign keys first in the db.py file, and then declaring the tables that require the foreign keys (just ensure the foreign table exists before referencing it). If you use individual files for different tables in the models folder, work out which tables are primary and make sure they are created first by prepending numbers to the filenames, so that by the time a table references a foreign key, the 'foreign' table already exists in the database.
However, there should be no issues with sqlite.
I had a custom primary key that needed to be set on a particular field in a model.
This was not enough, as an attempt to insert a duplicate number succeeded. When I replace primary_key=True with unique=True, it works properly and rejects duplicate numbers! But according to this document (which covers fields):
primary_key=True implies null=False and unique=True.
This makes me confused: why did it accept the duplicate value in the first place, given the built-in unique=True?
Thank you.
Updated statement:
personName = models.CharField(primary_key=True,max_length=20)
Use an AutoField with primary_key=True instead.
Edit:
If you don't use an AutoField, you'll have to manually calculate/set the value of the primary key field, which is rather cumbersome. Is there a reason you need ReportNumber to be the primary key? You could still have a unique report number to query reports by, alongside an auto-incrementing integer primary key.
Edit 2:
When you say duplicate primary key values are allowed, you indicate that what's happening is that an existing record with the same primary key is updated -- there aren't actually two objects with the same primary key in the database (which can't happen). The issue is in the way Django's ORM layer chooses to do an UPDATE (modify an existing DB record) vs. an INSERT INTO (create a new DB record). Check out this line from django.db.models.base.Model.save_base():
if (force_update or (not force_insert and
        manager.using(using).filter(pk=pk_val).exists())):
    # It does already exist, so do an UPDATE.
Particularly, this snippet of code:
manager.using(using).filter(pk=pk_val).exists()
This says: "If a record with the same primary key as this Model exists in the database, then do an update." So if you re-use a primary key, Django assumes you are doing an update, and thus doesn't raise an exception or error.
I think the best idea is to let Django generate a primary key for you, and then have a separate field (CharField or whatever) that has the unique constraint.