When I delete an existing record from the database and recreate an exactly identical record through the Django admin interface, the id value shown in the admin is not continuous. For instance, suppose a record has id 6 and the previous one has id 5, and I delete the one with id 6. When I recreate it, the id becomes 7 instead of 6. I think it is supposed to be 6. Is this an error, and how can I fix this issue?
That is the correct behaviour. Primary keys should not be re-used, especially to avoid conflicts when they have been referenced in other tables.
See this SO question for more info about it: How to restore a continuous sequence of IDs as primary keys in a SQL database?
If you really want to reset the auto-increment counter of the PK, you can run ALTER TABLE tablename AUTO_INCREMENT = 1 (for MySQL). Other databases may have different limitations, such as not being able to reset to any value lower than the highest value already used, even if there are gaps (MySQL/InnoDB).
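The same behaviour can be illustrated with SQLite, used here only as a stand-in because it ships with Python; its sqlite_sequence table plays the role of MySQL's AUTO_INCREMENT counter, and the table name t is invented for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# AUTOINCREMENT keeps a persistent counter in the sqlite_sequence table
cur.execute("CREATE TABLE t (id INTEGER PRIMARY KEY AUTOINCREMENT, v TEXT)")
cur.executemany("INSERT INTO t (v) VALUES (?)", [("a",), ("b",), ("c",)])

cur.execute("DELETE FROM t")                  # remove every row
cur.execute("INSERT INTO t (v) VALUES ('d')")
id_after_delete = cur.lastrowid               # 4 -- the counter did not reset

# The SQLite analogue of MySQL's ALTER TABLE t AUTO_INCREMENT = 1:
cur.execute("DELETE FROM t")
cur.execute("UPDATE sqlite_sequence SET seq = 0 WHERE name = 't'")
cur.execute("INSERT INTO t (v) VALUES ('e')")
id_after_reset = cur.lastrowid                # 1 -- ids start over
print(id_after_delete, id_after_reset)
```

Note that manually editing sqlite_sequence (or running the MySQL ALTER TABLE) is only safe on an empty table, for exactly the referential-integrity reasons above.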
Related
I have a problem trying to delete a row from my database.
The DB looks like this:
[The DB picture]
I want to delete a user, so I tried:
DELETE FROM USERS WHERE ID = 201;
but it didn't work at all, apparently because the row is connected with the other tables.
And I can't use DROP because it's SQLite.
I looked it up on the internet and found nothing.
the error: [error screenshot]
Your table name is Users but you are using USERS in your command. That will give you an error, since it should match the name of the table:
DELETE FROM Users WHERE ID = 201;
Let me assume that you have another table that defines the user_id as a foreign key:
create table another (
    another_id int,
    user_id int,
    foreign key (user_id) references users(user_id)
);
And you have data in this table, such as:
another_id user_id
1 200
2 201
3 201
Now, you want to delete 201 from users. What happens to rows 2 and 3 in this table? There are several options:
1. The rows remain with their values as-is, but those values no longer refer to a valid user.
2. The rows that refer to 201 are deleted.
3. The rows are set to some other value, such as NULL or a default value.
The default behavior without a foreign key constraint is (1). And you end up with dangling references. And your database lacks relational integrity. That is considered a bad thing.
SQL supports cascading delete and update foreign key references (although not all databases support these). These respectively implement (2) and (3) on the above list.
You can also manually change the referring rows so the user can be deleted. It is not clear what you really want to do, but this explains why you can't just delete the row and the facilities that SQL offers to get around that.
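These options can be demonstrated with a short SQLite session (table and column names follow the example above; ON DELETE CASCADE implements option 2, and ON DELETE SET NULL would implement option 3):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite does not enforce FKs by default
conn.execute("CREATE TABLE users (user_id INTEGER PRIMARY KEY)")
conn.execute("""
    CREATE TABLE another (
        another_id INTEGER PRIMARY KEY,
        user_id INTEGER REFERENCES users(user_id) ON DELETE CASCADE
    )
""")
conn.execute("INSERT INTO users (user_id) VALUES (200), (201)")
conn.execute(
    "INSERT INTO another (another_id, user_id) VALUES (1, 200), (2, 201), (3, 201)"
)

# Deleting user 201 cascades: rows 2 and 3 in `another` go with it.
conn.execute("DELETE FROM users WHERE user_id = 201")
remaining = conn.execute("SELECT another_id FROM another").fetchall()
print(remaining)  # [(1,)]
```

Without the PRAGMA (or without the cascade clause) the same DELETE would either leave dangling rows or fail with a foreign key error, which is the situation the question ran into.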
A beginner here!
Here's how I'm using the URL path (from the DRF tutorials):
path('articles/', views.ArticleList.as_view()),
path('articles/<int:pk>/', views.ArticleDetail.as_view())
and I noticed that after deleting an 'Article' (this is my model), the pk stays the same.
An example:
1st Article pk = 1, 2nd Article pk = 2, 3rd Article pk = 3
After deleting the 2nd Article I'm expecting --
1st Article pk = 1, 3rd Article pk = 2
yet it remains
3rd Article pk = 3.
Is there a better way to implement the URL? Maybe the pk is not the variable I'm looking for?
Or should I update the list somehow?
Thanks!
and I noticed that after deleting an Article (this is my model), the pk stays the same.
This is indeed the expected behaviour. Removing an object will not "fill the gap" by shifting all the other primary keys. This would mean that for huge tables, you start updating thousands (if not millions) of records, resulting in a huge amount of disk IO. This would make the update (very) slow.
Furthermore, not only would the primary keys of the table that stores the records need to be updated, but also every foreign key that refers to those records. Several tables thus need to be updated. This results in even more disk IO, and it can slow down a lot of unrelated updates to the database due to locking.
This problem can be even more severe if you are working with a distributed system where you have multiple databases on different servers. Updating these databases atomically is a serious challenge. The CAP theorem [wiki] demonstrates that in case a network partition failure happens, then you either can not guarantee availability or consistency. By updating primary keys, you put more "pressure" on this.
Shifting the primary key is also not a good idea anyway. It would mean that if your REST API for example returns the primary key of an object, then the client that wants to access that object might not be able to access that object, because the primary key changed in between. A primary key thus can be seen as a permanent identifier. It is usually not a good idea to change the token(s) that a client uses to access an object. If you use a primary key, or a slug, you know that if you later refer to the same item, you will again retrieve the same item.
how to 'update' the pk after deleting an object?
Please don't. Sorting elements can be done with a timestamp, but that is something different from having an identifier space that contains no gaps. A gap is usually not a real problem, so you had better not turn it into one.
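The behaviour described above is easy to reproduce with SQLite (used as a stand-in for whatever database backs the Django project; the article table is invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE article (id INTEGER PRIMARY KEY AUTOINCREMENT, title TEXT)"
)
for title in ("first", "second", "third"):
    conn.execute("INSERT INTO article (title) VALUES (?)", (title,))

conn.execute("DELETE FROM article WHERE id = 2")      # delete the 2nd article
ids_after_delete = [r[0] for r in conn.execute("SELECT id FROM article ORDER BY id")]
print(ids_after_delete)   # [1, 3] -- nothing is renumbered

conn.execute("INSERT INTO article (title) VALUES ('fourth')")
ids_after_insert = [r[0] for r in conn.execute("SELECT id FROM article ORDER BY id")]
print(ids_after_insert)   # [1, 3, 4] -- the gap at 2 simply stays
```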
Yesterday we added a simple form to our site, implemented as a Django API view connected to a PostgreSQL database.
Today I queried the database to see how many rows had been submitted, and I noticed something strange in the results. We created and migrated our model using the Django ORM, so the primary key is defined as an auto-incrementing integer field. The problem is that the row ids are not continuous and are widely spread: as I write this question, the max id value is 252, but we have only 72 records in the table.
I've seen this before in other tables, but those tables were subject to delete and update queries; we only ever insert into this new table. My question is: has our data been deleted, or is this normal behaviour in PostgreSQL?
I've searched Google and it seems the only way to tell is to check the WAL logs, but we haven't enabled those for our database yet. Is there another way to check whether the data is consistent?
Thanks.
Expect holes in a sequence
If you have multiple connections to a database that are adding rows, then you should expect to see holes in the sequence number results.
If Alice is adding a row, she may bump the sequence from 10 to 11 while not yet doing a COMMIT. Meanwhile, Bob adds a record, bumping the sequence to 12, and assigning 12 to his row, which he now commits. So the database has stored rows with ID field values of 10 and 12, but not 11.
If Alice commits, then 11 will appear in a query.
If Alice does a ROLLBACK, then 11 will never appear in a query.
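A toy model in plain Python (not a real database client, just an illustration of the semantics) shows why a rollback leaves a hole: reserving the next sequence value is not undone by the rollback.

```python
# Toy model of a database sequence. Real sequences behave the same way:
# nextval() is never rolled back with the rest of the transaction.
class Sequence:
    def __init__(self, start):
        self.value = start

    def nextval(self):
        self.value += 1
        return self.value

seq = Sequence(start=10)      # ids up to 10 are already committed
committed_ids = [10]

alice_id = seq.nextval()      # Alice reserves 11 but has not committed yet
bob_id = seq.nextval()        # Bob reserves 12 ...
committed_ids.append(bob_id)  # ... and commits

# Alice rolls back. Her row disappears, but the sequence is NOT
# decremented: id 11 will never appear in any query.
print(sorted(committed_ids))  # [10, 12]
```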
I have a Django 1.7rc project running on multiple app servers with a MySQL database.
I have noticed that the primary key of a model has gaps, e.g., it jumps from 10001 to 10003, and from 10011 to 10014. I cannot figure out why; there is no code that deletes the records directly, though they could be cascade-deleted, which I will investigate further.
order = Order(cart=cart)
order.billing_address = billing_address
order.payment = payment
order.account = account
order.user_uuid = account.get('uuid')
order.save()
I thought I would ask here whether this is normal on a multiple-app-server setup?
Gaps in a primary key are normal (unless you're using a misconfigured SQLite table, which does not use a monotonic PK by default) and help to maintain referential integrity. Having said that, they are usually only caused by deletions or updates within the table, cascaded or otherwise. Verify that you have no code which may delete or update the PK in that table, directly or indirectly.
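The SQLite remark can be seen directly: a plain INTEGER PRIMARY KEY hands out max(rowid) + 1 and will re-use the id of a deleted top row, while AUTOINCREMENT keeps the key monotonic. The table names here are invented for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Plain INTEGER PRIMARY KEY: next rowid is max(rowid) + 1, so deleting
# the highest row lets its id be handed out again.
conn.execute("CREATE TABLE plain (id INTEGER PRIMARY KEY, v TEXT)")
conn.execute("INSERT INTO plain (v) VALUES ('a'), ('b')")   # ids 1, 2
conn.execute("DELETE FROM plain WHERE id = 2")
cur = conn.execute("INSERT INTO plain (v) VALUES ('c')")
plain_id = cur.lastrowid   # 2 -- the id is re-used

# With AUTOINCREMENT the counter is monotonic: 2 never comes back.
conn.execute("CREATE TABLE mono (id INTEGER PRIMARY KEY AUTOINCREMENT, v TEXT)")
conn.execute("INSERT INTO mono (v) VALUES ('a'), ('b')")
conn.execute("DELETE FROM mono WHERE id = 2")
cur = conn.execute("INSERT INTO mono (v) VALUES ('c')")
mono_id = cur.lastrowid    # 3
print(plain_id, mono_id)
```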
I want to predict the unique ID that I will get for the next row created in MySQL via C++.
For example, if I create a user in my database, I want to predict the (auto-incremented) unique ID of the user that will be created, using one query. This is for security purposes, and I have no real idea how to go about it. Any nudges in the right direction would be great, thank you.
To sum it up: I want to predict the next unique ID of a user in the database and return it while the user is created. One last note: the unique ID is auto-incremented.
You do not want to predict the value as there is no guarantee it will be anywhere close to accurate.
For example, if something attempts to add a user but fails, the auto-increment field is usually already updated (so you may have users 1...N, but since N+1 failed, the next ID would be N+2, not N+1).
You can use mysql_insert_id() to get the last id that was added by your connection, but you cannot really get a prediction for what the next value will be (at least not accurately).
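In Python's sqlite3 module (shown here as a stand-in, since the same pattern applies across databases) the equivalent of mysql_insert_id() is the cursor's lastrowid, read after the insert on the same connection; the users table is hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)")

# Insert first, then read the id the database actually assigned --
# never try to guess it up front.
cur = conn.execute("INSERT INTO users (name) VALUES ('alice')")
new_id = cur.lastrowid
print(new_id)  # 1
```

Because the id is read from the same connection that performed the insert, concurrent inserts on other connections cannot interfere, which is exactly why reading after the fact beats predicting.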