Delete operation using Serializers - Django

In the official documentation the serializer class only shows create() and update() methods. Is there any way to perform a delete? If yes, how? If no, why not?

Here is the source code of the serializers: https://github.com/encode/django-rest-framework/blob/master/rest_framework/serializers.py
As you can see, there is no delete. I think the reason is that there is nothing to serialize or deserialize when you make a deletion request.
Think about what serialization and deserialization mean: turning an object in memory into a string representation, or the other way around. When we make a request to delete /Foo/5, there is no string representation and nothing to deserialize.
If you want custom behavior during deletion, you can override destroy() on the viewset (or delete() on a plain APIView).
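As an illustration, here is a minimal sketch of such an override, assuming a hypothetical Foo model and FooSerializer (the names are placeholders, not part of DRF):

from rest_framework import status, viewsets
from rest_framework.response import Response

from myapp.models import Foo                 # hypothetical model
from myapp.serializers import FooSerializer  # hypothetical serializer

class FooViewSet(viewsets.ModelViewSet):
    queryset = Foo.objects.all()
    serializer_class = FooSerializer

    def destroy(self, request, *args, **kwargs):
        instance = self.get_object()
        # Any custom behavior goes here: logging, soft delete, extra checks...
        self.perform_destroy(instance)
        return Response(status=status.HTTP_204_NO_CONTENT)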

Related

Getting type of serialized flatbuffer

I am looking for a way to tell what kind of object is in serialized form. The reason is that I want to use the object API on some of the objects, and standard flatbuffers on others. Is there any way to do it without creating another base object for both situations?
You can use the file_identifier functionality in the schema, and then use the calls in the API that test for the presence of that identifier.
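If it helps, here is a rough sketch of the idea in Python; "MONS" is a made-up identifier, and the byte offsets assume a plain (non-size-prefixed) FlatBuffer, where the 4-byte file_identifier sits right after the 4-byte root offset:

def buffer_identifier(buf):
    # file_identifier of a plain (non-size-prefixed) FlatBuffer: bytes 4..8
    return bytes(buf[4:8])

def is_monster_buffer(buf):
    # Route buffers to the object API or the standard accessors based on this.
    return buffer_identifier(buf) == b"MONS"   # "MONS" is hypothetical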

What exactly is a safe method in REST web services?

I am absolutely new to REST and I have the following doubt about what safe methods and idempotent methods are in REST.
I know (but I could be wrong) that the GET, HEAD, OPTIONS and TRACE methods are defined as safe because they are only intended for retrieving data.
But now I am reading this article: http://restcookbook.com/HTTP%20Methods/idempotency/ and it says:
Safe methods are HTTP methods that do not modify resources. For
instance, using GET or HEAD on a resource URL, should NEVER change the
resource.
And up to here it is fine, nothing different from what I already know, but then it asserts:
However, this is not completely true. It means: it won't change the
resource representation. It is still possible, that safe methods do
change things on a server or resource, but this should not reflect
in a different representation.
What exactly does this assertion mean? What exactly is a representation? And what does it mean that a safe method can change something on a resource, but that this change is not reflected in a different representation?
Then it also gives this example:
GET /blog/1234/delete HTTP/1.1
and says that this is incorrect if it would actually delete the blog post, and asserts:
Safe methods are methods that can be cached, prefetched without any
repercussions to the resource.
What exactly is a representation?
A "representation" is the data that is returned from the server that represents the state of the object. So if you GET at http://server/puppy/1 it should return a "representation" of the puppy (because, it can't return the actual puppy of course.)
However, this is not completely true. It means: it won't change the
resource representation. It is still possible, that safe methods do
change things on a server or resource, but this should not reflect in
a different representation.
What exactly does this assertion mean?
They mean that if you GET /server/puppy/1 two times in a row, it should give you the same response. However, imagine you have a field that contains the number of times each puppy was viewed. That field is used to provide a page listing the top 10 most viewed puppies. That information is provided via GET /server/puppystats. It is okay for GET /server/puppy/1 to update that information. But it should not update information about the puppy itself. Or, if it DOES update the information about the puppy itself, that information is not part of the representation of the puppy returned by GET /server/puppy/1. It is only part of some other representation that is available via another URL.
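To make that concrete, here is an illustrative Django-style sketch (Puppy and PuppyStats are made-up models): the handler bumps a server-side view counter, but the counter is not part of the representation it returns, so two GETs in a row still look identical:

from django.db.models import F
from django.http import JsonResponse

from myapp.models import Puppy, PuppyStats   # hypothetical models

def get_puppy(request, puppy_id):
    puppy = Puppy.objects.get(pk=puppy_id)

    # Server-side side effect: feed the "top 10 most viewed" stats page.
    PuppyStats.objects.filter(puppy=puppy).update(views=F("views") + 1)

    # The representation of the puppy itself does not include the counter,
    # so this GET stays "safe" from the client's point of view.
    return JsonResponse({"id": puppy.pk, "name": puppy.name, "breed": puppy.breed})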
If it helps, this is a similar concept to the "mutable" keyword in C++ when applied to a const object. "mutable" allows you to modify the object, but it should not modify it in a way that is visible outside of the class.

C++ Design: Pool and pointers VS client-server

I'm designing a software tool in which there's an in-memory model, and the API user can get objects of the model, query them and set values.
Since all the model's objects belong to a single model, and most operations must be recorded and tested, etc., each created object must be registered to the Model object. The Model stores all objects as std::unique_ptr since it's the only owner of them. When needed, it passes raw pointers to users.
What worries me is the possibility that the user calls delete on these pointers. But if I use std::shared_ptr, the user can still call get() and delete the result, so it's not much safer.
Another option I thought of is to refer to objects by a name string, or to pass ObjectReference objects instead of the real objects; these ObjectReferences can then be destroyed without affecting the actual stored object.
These references work somewhat like a client: you tell them what to do, and they forward the request to the actual object. It's a lot of extra work for the developer, but it protects the pointers.
Should I be worried about the pointers? Until now I was using smart pointers all the time, but now I need to somehow allow the user to access objects managed by a central model, without allowing the user to delete them.
[Hmmm... maybe make the destructor private, and let only the unique_ptr have access to it through a Deleter?]
You shouldn't worry about users calling delete on your objects. It's one of those things that are perfectly fine as a documented constraint; any programmer who violates it deserves whatever problems they run into.
If you still really want to explicitly forbid this, you could either write a lightweight facade object that your users pass by value (but that can be a lot of work depending on the number of classes you have to wrap) or, as you said, make the destructors private and have unique_ptr use a friend deleter.
I for one am not fond of working through identifiers only, this can quickly lead to performance issues because of the lookup times (even if you're using a map underneath).
Edit: Now that I think of it, there is a way in between identifiers and raw pointers/references: opaque references.
From the point of view of the users, it acts like an identifier, all they can do is copy/move/assign it or pass it to your model.
Internally, it's just a class with a private pointer to your objects. Your model being a friend of this class, it can create new instances of the opaque reference from a raw pointer (which a user can't do), and use the raw pointer to access the object without any performance loss.
Something along the lines of:
class OpaqueRef
{
// default copy/move/assignment/destructor
private:
friend class Model;
Object* m_obj;
OpaqueRef(Object& obj) : m_obj(&obj) {}
};
Still, not sure if it's worth the trouble (I stand by my first paragraph), but at least you got one more option.
Personally, I'd keep the internal pointer in the model without exposing it and provide an interface via model ids, so all operations go through the interface.
So, you could create a separate interface class that allows modification of model attributes via id. External objects would only request and store the id of the object they want to change.

Django & Soft Deletion: Implementation architecture

Definitions
SOFT DELETE - does not remove an object from the database, but appears to do so
HARD DELETE - removes the object completely from the database
Question
What is the best way to implement soft deletion in a codebase (specifically, a Django project)?
The simplest method I think would be to simply add:
is_deleted = models.BooleanField(default=False)
to a superclass which implements a softDeleteObject, and then override delete() to set the appropriate flags on the objects in question. Related objects would also need to inherit from the same superclass.
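A minimal sketch of that first approach might look like this (the class names and the queryset override are illustrative, just one way to also handle bulk deletes):

from django.db import models

class SoftDeleteQuerySet(models.QuerySet):
    def delete(self):
        # Bulk "deletes" just flag the rows instead of removing them.
        return self.update(is_deleted=True)

class SoftDeleteModel(models.Model):
    is_deleted = models.BooleanField(default=False)

    objects = SoftDeleteQuerySet.as_manager()

    class Meta:
        abstract = True

    def delete(self, using=None, keep_parents=False):
        # Override the normal delete() to set the flag instead.
        self.is_deleted = True
        self.save(update_fields=["is_deleted"])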
An alternative would be instead to delete the original, and have what amounts to an Archive object which is a representation of the deleted objects.
Analysis
The first appears to have a greater degree of simplicity, but does require some wide-ranging overrides - for example, User would have to be overridden to ensure that the foreign key relations of all deleted objects still held, and that deleting a User then didn't hard delete all their soft-deleted objects.
The second could be implemented with pre_delete signals which trigger creation of the surrogate objects. This again has some advantages (don't need to override all the delete() methods), but does require the implementation of archived versions of the models used in the project.
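For example, the second approach could be wired up roughly like this (Foo, ArchivedFoo and their fields are placeholders):

from django.db.models.signals import pre_delete
from django.dispatch import receiver

from myapp.models import Foo, ArchivedFoo   # hypothetical models

@receiver(pre_delete, sender=Foo)
def archive_foo(sender, instance, **kwargs):
    # Snapshot the fields worth keeping before the row disappears.
    ArchivedFoo.objects.create(
        original_id=instance.pk,
        name=instance.name,
    )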
Which is preferable, and are there other alternatives?
Why not use an active/deleted/status flag on the specific models where it is needed and do it that way? Or check out the app django-reversion; it probably has everything you will need ;)

C++ class design from database schema

I am writing a Perl script to parse a MySQL database schema and create C++ classes when necessary. My question is a pretty easy one, but is something I haven't really done before and don't know common practice for. Any object of any of the classes created will need "get" methods to populate its information. So my questions are twofold:
Does it make sense to call all of the get methods in the constructor so that the object has data right away? Some classes will have a lot of them, so fetching as needed might make sense too. I have two constructors now: one that populates the data and one that does not.
Should I also have another "get" method that retrieves the object's copy of the data rather than the DB copy?
I could go both ways on #1 and am leaning towards yes on #2. Any advice or pointers would be much appreciated.
Usually, the most costly part of an application is round trips to the database, so it would be much more efficient to populate all your data members from a single query than to do them one at a time, either on an as-needed basis or from your constructor. Once you've paid for the round trip, you may as well get your money's worth.
Also, in general, your get* methods should be declared as const, meaning they don't change the underlying object, so having them go out to the database to populate the object would break that (which you could allow by making the member variables mutable, but that would basically defeat the purpose of const).
To break things down into concrete steps, I would recommend:
Have your constructor call a separate init() method that queries the database and populates your object's data members.
Declare your get* methods as const, and just have them return the data members.
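To sketch the idea (in Python rather than C++ for brevity; the table and column names are made up), the constructor does one query and the accessors only ever read the cached members:

import sqlite3

class Album:
    def __init__(self, conn, album_id):
        # Single round trip: grab every column we care about at once.
        row = conn.execute(
            "SELECT title, artist, year FROM albums WHERE id = ?",
            (album_id,),
        ).fetchone()
        if row is None:
            raise LookupError("no album with id %d" % album_id)
        self._title, self._artist, self._year = row

    # The moral equivalent of const getters: read-only views of cached data.
    @property
    def title(self):
        return self._title

    @property
    def artist(self):
        return self._artist

    @property
    def year(self):
        return self._year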
First realize that you're re-inventing the wheel here. There are a number of decent object-relational mapping libraries for database access in just about every language. For C/C++ you might look at:
http://trac.butterfat.net/public/StactiveRecord
http://debea.net/trac
Ok, with that out of the way, you probably want to create a static method in your class called find or search which is a factory for constructing objects and selecting them from the database:
Artist MJ = Artist::Find("Michael Jackson");
MJ.set("relevant", "no");
MJ.save();
Note the save method, which takes the modified object and stores it back into the database. If you actually want to create a new record, then you'd use a static New() method which would instantiate an empty object:
Artist StackOverflow = Artist::New();
StackOverflow.set("relevant", "yes");
StackOverflow.save();
Note that the set and get methods here just set and get the values on the object, not in the database. To actually read or store rows in the database you'd use the static Find method or the object's save method.
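A compact Python sketch of that Find/save shape (the SQL, table and column names are illustrative only):

import sqlite3

class Artist:
    def __init__(self, conn, row_id=None, **fields):
        self._conn = conn
        self._id = row_id
        self._fields = dict(fields)

    @classmethod
    def find(cls, conn, name):
        # Factory: select a row and wrap it in an object.
        row = conn.execute(
            "SELECT id, name, relevant FROM artists WHERE name = ?", (name,)
        ).fetchone()
        if row is None:
            return None
        return cls(conn, row_id=row[0], name=row[1], relevant=row[2])

    def set(self, key, value):
        self._fields[key] = value

    def get(self, key):
        return self._fields.get(key)

    def save(self):
        # INSERT for brand-new objects, UPDATE for ones loaded via find().
        if self._id is None:
            cur = self._conn.execute(
                "INSERT INTO artists (name, relevant) VALUES (?, ?)",
                (self._fields.get("name"), self._fields.get("relevant")),
            )
            self._id = cur.lastrowid
        else:
            self._conn.execute(
                "UPDATE artists SET name = ?, relevant = ? WHERE id = ?",
                (self._fields.get("name"), self._fields.get("relevant"), self._id),
            )
        self._conn.commit()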
There are existing tools that reverse-engineer databases into Java (and probably other languages). Consider using one of them and converting the output to C++.
I would not recommend having your get methods go to the database at all, unless absolutely necessary for your particular problem. It makes for a lot more places something could go wrong, and probably a lot of unnecessary reads on your DB, and could inadvertently tie your objects to db-specific features, losing a lot of the benefits of a tiered architecture. As far as your domain model is concerned, the database does not exist.
Edit: this is for #2 (obviously). For #1 I would say no, for many of the same reasons.
Another alternative would be to not automate creating the classes, and instead create separate classes that only contain the data members that individual executables are interested in, so that those classes only pull the necessary data.
Don't know how many tables we're talking about, though, so that may explode the scope of your project.