Django & Soft Deletion: Implementation architecture

Definitions
SOFT DELETE - does not remove an object from the database, but appears to do so
HARD DELETE - removes the object completely from the database
Question
What is the best way to implement soft deletion in a codebase (specifically, a Django project)?
I think the simplest method would be to add:
is_deleted = models.BooleanField(default=False)
to a superclass which implements a softDeleteObject, and then override delete() to set the appropriate flags on the objects in question. Related objects would also need to inherit from the same superclass.
An alternative would be instead to delete the original, and have what amounts to an Archive object which is a representation of the deleted objects.
Analysis
The first appears simpler, but does require some wide-ranging overrides - for example, User would have to be overridden to ensure that the foreign-key relations of all deleted objects still held, and that deleting a User didn't then hard-delete all their soft-deleted objects.
The second could be implemented with pre_delete signals which trigger creation of the surrogate objects. This again has some advantages (no need to override every delete() method), but does require implementing archived versions of the models used in the project.
Which is preferable, and are there other alternatives?

Why not use active/deleted/status flags on the specific models where they are needed, and do it that way? Or check out the app django-reversion; it probably has everything you will need ;)

Related

Delete operation using Serializers

The official documentation for the serializer class shows only create and update methods. Is there any way to perform a delete? If yes, how? If no, why not?
Here is the source code of serializer. https://github.com/encode/django-rest-framework/blob/master/rest_framework/serializers.py
As you can see, there is no delete. I think the reason is that there is nothing to serialize or deserialize when you make a deletion request.
Think about what serialization and deserialization mean: it's the process of turning an object in memory into a string representation (or the other way around). When we make a request to delete /Foo/5, there is no string representation and nothing to deserialize.
If you want custom behavior during deletion, you can override destroy() (or perform_destroy()) in the viewset.

What are the downsides of a QAbstractListModel containing objects in QML?

Qt offers the possibility to combine C++ models with QML and suggests three approaches in the docs:
QStringList
QObjectList
QAbstractItemModel
The former two are extremely simple to use, e.g. QObjectList:
// in C++
QList<QObject *> dataList;
dataList.append(new DataObject("Item 1", "red"));
// expose the list to QML (the docs do this via QQmlContext):
ctxt->setContextProperty("dataList", QVariant::fromValue(dataList));

// in QML
ListView {
    model: dataList
    delegate: Text { text: name }
}
but they both come with a strong caveat:
Note: There is no way for the view to know that the contents of a
QList has changed. If the QList changes, it is necessary to reset the
model [...]
QAbstractItemModel is difficult to use with objects because the objects' properties are not directly exposed, and therefore keeping them in sync takes quite a bit of effort.
However, it is possible to wrap a QList in a QAbstractItemModel and obtain a super simple model. See here: Implementation 1, Implementation 2
Is there a reason why Qt does not implement this natively? Performance? Memory management issues? It seems such an obviously good idea and with ObjectModel they already implement something similar.
The one prominent downside of using QObject as a model item is that the base class is pretty big - it is something of a "god object" (which is an anti-pattern) containing a whole lot of stuff you don't really need most of the time. As a result, it adds about 160 bytes of overhead on top of any model data you may have, which may be problematic if you have a big model with lots of items and the items themselves are relatively small. You end up with a lot of overhead.
A QObjectList as a model is always a bad idea, unless you are doing something entirely trivial. Since it doesn't implement the proper interface to notify referencing views of changes, the only way is to force an update, which will redraw the entire model each time rather than just the changes.
There is no requirement on what item objects are, as long as you implement the model properly.
The second implementation is particularly useful for a number of reasons:
you don't need to bother with implementing a specific "static" model with fixed roles for each and every usage scenario
your model items can have fundamentally different properties, you are not limited to a model "schema"
you automatically get notifications for bindings in QML since you are dealing with QObject and Q_PROPERTY
you can define models declaratively, and you can even nest models to create tree structures, which you cannot do with ListModel
you can define the actual model items in pure QML without having to recompile all the time, a.k.a. rapid prototyping, and when done, you can simply port the objects to C++
on top of all these advantages, the model is actually much simpler to implement and maintain than a regular "rigid" model; role lookup is faster, since you essentially have a single object role and no lookup whatsoever; there is no need to implement data-change signals for roles; and so on... easy peasy

QWizard: QWizardPage::registerField vs shared object pointer

Looking at the Qt docs, the correct way to handle objects shared between pages is to use QWizardPage::registerField and QWizardPage::field.
I personally think it is simpler, since we are in C++, to pass the QWizardPage(s) a pointer to my shared object in their constructors, since there is no risk of concurrent access to the shared resource. Every QWizardPage changes the value of that object safely, and it is shared between pages because the pointer location is the same.
What am I missing? Why the need for such methods?
They are different approaches:
With a shared pointer you need a member for each object you want to share, which means you need to change the interface of your classes.
With the field API you don't change the interface, but then it is not defined in the interface which fields exist, so you should document them separately. This seems to me the better way when you have a multitude of fields.
Also note the automatic validation by the wizard:
If an asterisk (*) is appended to the name when the property is registered, the field is a mandatory field. When a page has mandatory fields, the Next and/or Finish buttons are enabled only when all mandatory fields are filled.
To consider a field "filled", QWizard simply checks that the field's current value doesn't equal the original value (the value it had when initializePage() was called). For QLineEdit and QAbstractSpinBox subclasses, QWizard also checks that hasAcceptableInput() returns true, to honor any validator or mask.
As you see: it's mainly a convenience feature. And it might save you from recompiling lots of stuff when working with bigger projects.

C++ Design: Pool and pointers VS client-server

I'm designing a software tool in which there's an in-memory model, and the API user can get objects of the model, query them and set values.
Since all the model's objects belong to a single model, and most operations must be recorded and tested, etc., each created object must be registered to the Model object. The Model stores all objects as std::unique_ptr since it's the only owner of them. When needed, it passes raw pointers to users.
What makes me worry is the possibility that the user calls delete on these pointers. But if I use std::shared_ptr, the user can still use get() and call delete on that. So it's not much safer.
Another option I thought of is to refer to objects by a name string, or to pass ObjectReference objects instead of the real objects; these ObjectReferences can then be destroyed without affecting the actual stored object.
These References work somewhat like a client: you tell them what to do, and they forward the request to the actual object. It's a lot of extra work for the developer, but it protects the pointers.
Should I be worried about the pointers? Until now I was using smart pointers all the time, but now I need to somehow allow the user to access objects managed by a central model, without allowing the user to delete them.
[Hmmm... maybe make the destructor private, and let only the unique_ptr have access to it through a Deleter?]
You shouldn't bother about users calling delete on your objects. It's one of those things that are perfectly fine as a documented constraint; any programmer violating it deserves whatever problems they run into.
If you still really want to explicitly forbid this, you could either write a lightweight facade object that your users will pass by value (but it can be lot of work depending on the number of classes you have to wrap) or, as you said, make their destructor private and have unique_ptr use a friend deleter.
I for one am not fond of working through identifiers only, this can quickly lead to performance issues because of the lookup times (even if you're using a map underneath).
Edit: Now that I think of it, there is a way in between identifiers and raw pointers/references: opaque references.
From the point of view of the users, it acts like an identifier, all they can do is copy/move/assign it or pass it to your model.
Internally, it's just a class with a private pointer to your objects. Your model being a friend of this class, it can create new instances of the opaque reference from a raw pointer (which a user can't do), and use the raw pointer to access the object without any performance loss.
Something along the lines of:
class OpaqueRef
{
public:
    // default copy/move/assignment/destructor
private:
    friend class Model;
    Object* m_obj;
    explicit OpaqueRef(Object& obj) : m_obj(&obj) {}
};
Still, not sure if it's worth the trouble (I stand by my first paragraph), but at least you got one more option.
Personally, I'd keep the internal pointer in the model without exposing it and provide an interface via model ids, so all operations go through the interface.
So, you could create a separate interface class that allows modification of model attributes via id. External objects would only request and store the id of the object they want to change.

C++ class design from database schema

I am writing a Perl script to parse a MySQL database schema and create C++ classes when necessary. My question is a pretty easy one, but is something I haven't really done before and don't know common practice for. Any object of any of the classes created will need "get" methods to populate its information. So my questions are twofold:
Does it make sense to call all of the get methods in the constructor so that the object has data right away? Some classes will have a lot of them, so fetching as needed might make sense too. I have two constructors now: one that populates the data and one that does not.
Should I also have another "get" method that retrieves the object's copy of the data rather than the DB copy?
I could go both ways on #1 and am leaning towards yes on #2. Any advice, pointers would be much appreciated.
Usually, the most costly part of an application is round trips to the database, so it would be much more efficient to populate all your data members from a single query than to do them one at a time, either on an as-needed basis or from your constructor. Once you've paid for the round trip, you may as well get your money's worth.
Also, in general, your get* methods should be declared as const, meaning they don't change the underlying object, so having them go out to the database to populate the object would break that (which you could allow by making the member variables mutable, but that would basically defeat the purpose of const).
To break things down into concrete steps, I would recommend:
Have your constructor call a separate init() method that queries the database and populates your object's data members.
Declare your get* methods as const, and just have them return the data members.
First realize that you're re-inventing the wheel here. There are a number of decent object-relational mapping libraries for database access in just about every language. For C/C++ you might look at:
http://trac.butterfat.net/public/StactiveRecord
http://debea.net/trac
Ok, with that out of the way, you probably want to create a static method in your class called find or search which is a factory for constructing objects and selecting them from the database:
Artist MJ = Artist::Find("Michael Jackson");
MJ.set("relevant", "no");
MJ.save();
Note the save method which then takes the modified object and stores it back into the database. If you actually want to create a new record, then you'd use the new method which would instantiate an empty object:
Artist StackOverflow = Artist::New();
StackOverflow.set("relevant", "yes");
StackOverflow.save();
Note the set and get methods here just set and get the values from the object, not the database. To actually store elements in the database you'd need to use the static Find method or the object's save method.
There are existing tools that reverse-engineer DBs into Java (and probably other languages); consider using one of them and converting the output to C++.
I would not recommend having your get methods go to the database at all, unless absolutely necessary for your particular problem. It makes for a lot more places something could go wrong, and probably a lot of unnecessary reads on your DB, and could inadvertently tie your objects to db-specific features, losing a lot of the benefits of a tiered architecture. As far as your domain model is concerned, the database does not exist.
edit - this is for #2 (obviously). For #1 I would say no, for many of the same reasons.
Another alternative would be to not automate creating the classes, and instead create separate classes that only contain the data members that individual executables are interested in, so that those classes only pull the necessary data.
Don't know how many tables we're talking about, though, so that may explode the scope of your project.