I read the documentation for cascading delete, but one thing I'm not clear on is whether it's possible to set up a database so that a child row can be shared by more than one parent, and the child row is only deleted when the last referencing parent row is deleted.
(Basically I want foreign keys that act like a std::shared_ptr.)
If that's not possible with the built-in ON DELETE CASCADE action, can it be done with a trigger? What would that look like?
The final option I have is that, although the library I'm writing makes it possible to create this shared-reference situation, I could simply make it throw an exception when an attempt to construct such a thing takes place.
Specifically, I have a self-referencing table that stores something like an abstract syntax tree. Each node has an operation and two child nodes. In my C++ program which uses the database, the objects which represent rows in this table have overloaded operators which return values that simultaneously cause rows to be created in the table. For instance, "a << b" would return a temporary value "c" and the database would get a row like (c_id, '<<', a_id, b_id). However, you could follow that with a call to "a << x", in which case the database would contain two rows referencing "a".
The C++ classes are written to insert the table row in their constructor and delete the table row in their destructor. Essentially, I want the creation and destruction of temporary objects in the C++ code to be mirrored in the state of the database. But I want to suppress deleting a child row if it is still referenced by a different parent.
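Roughly, the shape is something like this (a simplified sketch, not my real classes; it assumes SQLite's C API and a Node(id, op, lhs, rhs) table):

#include <sqlite3.h>

// Simplified RAII wrapper: insert the row in the constructor,
// delete it in the destructor, so the table mirrors object lifetimes.
class NodeRow {
public:
    NodeRow(sqlite3 *db, const char *op, sqlite3_int64 lhs, sqlite3_int64 rhs)
        : db_(db) {
        sqlite3_stmt *stmt = nullptr;
        sqlite3_prepare_v2(db_, "INSERT INTO Node(op, lhs, rhs) VALUES(?,?,?)",
                           -1, &stmt, nullptr);
        sqlite3_bind_text(stmt, 1, op, -1, SQLITE_TRANSIENT);
        sqlite3_bind_int64(stmt, 2, lhs);
        sqlite3_bind_int64(stmt, 3, rhs);
        sqlite3_step(stmt);
        sqlite3_finalize(stmt);
        id_ = sqlite3_last_insert_rowid(db_);
    }
    ~NodeRow() {
        // This is the delete I want suppressed while another
        // row still references this node.
        sqlite3_stmt *stmt = nullptr;
        sqlite3_prepare_v2(db_, "DELETE FROM Node WHERE id = ?",
                           -1, &stmt, nullptr);
        sqlite3_bind_int64(stmt, 1, id_);
        sqlite3_step(stmt);
        sqlite3_finalize(stmt);
    }
    sqlite3_int64 id() const { return id_; }
private:
    sqlite3 *db_;
    sqlite3_int64 id_;
};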
This is possible with a trigger.
When there are no parents with the same child left, delete the child:
CREATE TRIGGER t
AFTER DELETE ON Parent
-- fire only when no other parent still references the same child
WHEN NOT EXISTS (SELECT 1 FROM Parent WHERE ChildID = OLD.ChildID)
BEGIN
    DELETE FROM Child WHERE ID = OLD.ChildID;
END;
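For the self-referencing AST table in the question, where each row references two children, the same idea has to check both child columns. A sketch, assuming a schema like Node(id, op, lhs, rhs) (the names are placeholders):

PRAGMA recursive_triggers = ON;  -- let deletes cascade down the tree

CREATE TRIGGER node_gc
AFTER DELETE ON Node
BEGIN
    -- drop each former child only if nothing else references it
    DELETE FROM Node
    WHERE id = OLD.lhs
      AND NOT EXISTS (SELECT 1 FROM Node WHERE lhs = OLD.lhs OR rhs = OLD.lhs);
    DELETE FROM Node
    WHERE id = OLD.rhs
      AND NOT EXISTS (SELECT 1 FROM Node WHERE lhs = OLD.rhs OR rhs = OLD.rhs);
END;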
When executing the following query, is there a mechanism in place to return a list of the ids of the objects that have been deleted?
>>> MyModel.objects.all().delete()
>>> (430, {'myapp': 430})
I don't think it's possible unless you delete the objects one by one. From the documentation:
Keep in mind that this will, whenever possible, be executed purely in SQL, and so the delete() methods of individual object instances will not necessarily be called during the process
This means that MyModel.objects.all().delete() (or any bulk delete operation) is executed at the SQL level, so delete() will not be called on each individual object.
If you want to delete the objects one by one, you can do something like this:
deleted = []
for item in MyModel.objects.all():
    deleted.append(item.id)  # record the id before deleting
    item.delete()            # calls delete() per instance
print(deleted)
But this is a very inefficient solution and is not recommended.
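If all you need is the list of ids, a cheaper pattern is to capture them first and then do a single bulk delete. A sketch, assuming the default id primary key:

deleted = list(MyModel.objects.values_list('id', flat=True))
MyModel.objects.all().delete()  # still one bulk operation at the SQL level
print(deleted)

Note the two statements are not atomic, so the captured ids can drift from what is actually deleted if other writes happen in between.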
I have a model that retrieves data from a database table via a certain SQL query, and shows the items in a QTreeView. The characteristics are:
the data comes from a table, but has an underlying tree structure (some rows are parents that have rows below them as children)
this tree structure is shown in the QTreeView
the children are selectable in the QTreeView (not so the parents)
the table in the database gets updated continuously
in the updates, a child can be added to any existing parent
periodically (with a QTimer) the QTreeView is updated with the contents of the table
Since children can be added to any parent at any time, the first silly approach when updating the QTreeView is to clear it completely and append all the rows again, as parents or children. This is a zeroth-order approximation, and it is indeed terribly inefficient. In particular, the following problems appear:
Any existing selection is gone
Any expanded parent showing its children is collapsed (unless ExpandAll is active)
The view is reset to show the very first row.
What is the best solution to this problem? I mean, the first thing I will try is not clearing the QTreeView, but instead parsing all the rows returned from the table, checking for each of them whether the corresponding item already exists in the QTreeView, and adding it if not. But I wonder if there is a smarter way to tie a given database table to a QTreeView (I know this exists for a QTableView, but then the tree structure is gone).
This thread mentions a general approach, but it might get tricky quickly, and I am not sure how it would work if the underlying model is changing constantly (i.e. with the QModelIndex becoming invalid).
Worst case is that you will have to write your own mechanism to remember the selection before updating and then re-applying it.
I assume you use some model/view implementation? You could enhance your model with safe selection handling, in case the example mentioned above does not work for you.
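To illustrate, something like this rough sketch could work; it assumes each item exposes a stable database id through a custom role (IdRole and the role wiring are made up here):

#include <QAbstractItemModel>
#include <QItemSelectionModel>
#include <QTreeView>
#include <QVariant>

static const int IdRole = Qt::UserRole + 1;  // hypothetical stable-id role

// Collect the database ids of the selected items before the update.
QVariantList saveSelection(QTreeView *view)
{
    QVariantList ids;
    const QModelIndexList indexes = view->selectionModel()->selectedIndexes();
    for (const QModelIndex &index : indexes)
        ids.append(index.data(IdRole));  // keep stable keys, not QModelIndexes
    return ids;
}

// After the model has been repopulated, look the ids up again and re-select.
void restoreSelection(QTreeView *view, const QVariantList &ids)
{
    QAbstractItemModel *model = view->model();
    for (const QVariant &id : ids) {
        const QModelIndexList matches = model->match(
            model->index(0, 0), IdRole, id, 1,
            Qt::MatchExactly | Qt::MatchRecursive);
        if (!matches.isEmpty())
            view->selectionModel()->select(matches.first(),
                                           QItemSelectionModel::Select |
                                           QItemSelectionModel::Rows);
    }
}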
I guess this is the case for a self-answer.
As I presumed, after a careful analysis of what data is retrieved from the database, I had to do the following "upgrades" to the retrieval code:
I retrieve, along with the fields I want to show in the view, two identifiers, one for grouping rows and one for sorting items into groups
I also retrieve the internal record ID (an increasing integer number) from the table in the database, in order to ask only for new records in the next retrieval.
In the model population code, I added the following:
I first scan the initial records that may belong to existing groups in the model
When, in the scanning, I reach the last group in the model, this implies that the rest of retrieved records belong to new groups (remember the records are retrieved sorted such that items that belong to the same group are retrieved together)
Then start creating groups and adding items to each group, until we use all the records retrieved.
Finally, it is very important:
to use beginInsertRows() and endInsertRows() before and after inserting new items in the model
capture the sorting status of the view (with sortIndicatorSection() and sortIndicatorOrder()) and re-apply this sorting status after updating the model (with sortByColumn())
Doing that, the current position and selection in the QTreeView receiving the model updates are preserved, and the items are added and the view updated transparently for the user.
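As a rough sketch of those last two points (refreshModel() stands in for my actual update slot, which does the inserts between beginInsertRows() and endInsertRows()):

#include <QHeaderView>
#include <QTreeView>

void refreshModel();  // hypothetical: runs the query and appends new rows

void refreshPreservingSort(QTreeView *view)
{
    // capture the view's current sort state...
    const int column = view->header()->sortIndicatorSection();
    const Qt::SortOrder order = view->header()->sortIndicatorOrder();

    refreshModel();

    // ...and re-apply it after the model has been updated
    view->sortByColumn(column, order);
}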
I have a database which uses GUIDs instead of, say, an ordinary counter for the ID fields. But I can't seem to put NULL (instead of GUID_NULL) into such fields in the DB, even though the field in the database does accept NULL.
Let's say there is a parent-child relationship between two tables, so there are parent and child GUID references from one table to another. But the "root" parent does not have any parent, and there I would like to be able to put NULL into its ParentUID database field. If I put GUID_NULL there, then I need a corresponding default row in the referenced table with a GUID value of GUID_NULL so that the foreign key constraint won't break.
Also, using GUID_NULL with default rows in the referenced tables will give me a result set back when doing a standard join operation... which is not desirable.
The way it's done in code when inserting values into the database is using a CCommand which takes a structure containing the values of the row fields to be inserted. One of these is a GUID-typed variable.
So it creates an SQL statement string looking like
INSERT INTO [tablename] (field1, field2, field3,...) VALUES(?,?,?,...)
and then in a loop there is something like:
command.field1 = 1;
command.field2 = 2;
command.GUIDField = ?????? //I want to put NULL here instead of GUID_NULL
command.Open(...);
I hope it is understandable what I wish to do and what the conditions in code are.
Thankful for help!
UPDATE:
Ok, it was very hard to explain correctly, but this is exactly what I want to do: http://support.microsoft.com/kb/260900
It's just that when I follow that example, it makes no difference... I still get an FK constraint violation on insert, so I suspect it is trying to insert GUID_NULL instead of NULL. :(
The link I had in my UPDATE section does work, my bad: http://support.microsoft.com/kb/260900
It is the answer to my problems, perhaps it will help someone else as well! :3
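In short, the trick from the KB article is to bind a status value alongside the GUID column and set it to DBSTATUS_S_ISNULL before executing the insert. A sketch with the ATL OLE DB consumer templates (the class and field names are made up):

#include <atldbcli.h>

class CInsertValues
{
public:
    LONG m_Field1;
    GUID m_GuidField;
    DBSTATUS m_GuidFieldStatus;  // status slot bound next to the GUID value

    BEGIN_PARAM_MAP(CInsertValues)
        SET_PARAM_TYPE(DBPARAMIO_INPUT)
        COLUMN_ENTRY(1, m_Field1)
        COLUMN_ENTRY_STATUS(2, m_GuidField, m_GuidFieldStatus)
    END_PARAM_MAP()
};

// ...
command.m_GuidFieldStatus = DBSTATUS_S_ISNULL;  // provider writes NULL,
                                                // m_GuidField's value is ignored
command.Open(...);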
I have a little problem I am trying to figure out. I am working on a Qt app that uses a QTreeView, and I have a bunch of categories which have children, so they look like
Parent 1
Parent 2
- Child 1
- Child 2
Parent 3
and so on and so forth. In my database I have rows which have all the regular details (name, id, date created, etc.), and the ones which are children have their parent specified (so pid = parent row id). I then loop over them using the standard QSqlQuery stuff. But the problem I am running into is this...
Items are added to the tree view with QStandardItem* item = new QStandardItem(icon, name); and then appending the row with model->appendRow(item); but my children need to call parentitem->appendRow(item); i.e. the QStandardItem* item of the parent. But how can I find out what that is without storing every single item?
Moral of the story is: is there a way to do one of the following that won't destroy performance?
Store the QStandardItem* item in an array so that I can look up the parent in the children's loop?
Assign an ID or something to QStandardItem* item which I could then reference when adding a child.
Generate a TreeView model from an array, where the children array elements get added as children?
Something else I haven't thought of...
I can't seem to find any good examples of QTreeView with children from a database.
All you need is a QMap<int, QStandardItem*> rowItemMap. When you retrieve a row from the database with a given row id, you immediately create an item and add it to the map. You then add it to a parent that you look up in the map. You'll need to create a dummy parent as the root item.
There's nothing wrong with storing pointers to items. For reasonable amounts of items, it won't matter. If you think of storing more than 10k items, you may want to think of using a view that offers transitive closure up to a certain depth of the tree. It'd then be much easier to map such a view, via QSqlTableModel, directly onto the tree, without having to copy the entire tree from the database into a temporary model.
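A sketch of that, assuming a two-level tree as in the question and a hypothetical categories(id, pid, name) table where top-level rows have pid = 0:

#include <QMap>
#include <QSqlQuery>
#include <QStandardItemModel>

void populate(QStandardItemModel *model)
{
    QMap<int, QStandardItem*> rowItemMap;
    QStandardItem *root = model->invisibleRootItem();  // dummy root item

    // ORDER BY pid puts the pid = 0 parents first, so every child's
    // parent is already in the map by the time the child row arrives.
    QSqlQuery query("SELECT id, pid, name FROM categories ORDER BY pid");
    while (query.next()) {
        const int id  = query.value(0).toInt();
        const int pid = query.value(1).toInt();  // 0 for top-level rows
        QStandardItem *item = new QStandardItem(query.value(2).toString());

        rowItemMap.insert(id, item);
        rowItemMap.value(pid, root)->appendRow(item);  // fall back to the root
    }
}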
Here's my scenario: I was concerned about the SqlCeSyncClient applying deletes, then inserts, then updates. I have cases where a row may be de-referenced from another table and then deleted. For example, imagine this:
I have two tables, Customer and Area, where Customer.Area references Area.Name with a foreign key constraint
insert into Area values('Australia')
insert into Customer values('customer1','Australia')
-- Sync happens. Client gets 2 inserts.
update Customer set Area = 'New Zealand' where Area = 'Australia'
delete from Area where Name = 'Australia'
-- Sync happens. Client gets 1 update , and 1 delete
The SqlCeClientSyncProvider tries to apply the delete first, which it fails to do because of referential integrity constraints on the client.
My first question is: Why on earth did the boys at Microsoft code the SyncClient to process deletes FIRST when it breaks all referential integrity rules? Shouldn't they apply deletes LAST????
My next question is: I have managed to reverse the order by inspecting the code and writing the whole ApplyChanges method myself... but even when I do that the deletes are not applied. Is there some internal thing with datasets that means you can't change the order of processing?
The problem is not the order of operations (delete, update, insert, ...), but the order in which you placed your sync tables.
You should have synced the Area table first and the Customer table after.