Can we use same serializer for POST, PUT & GET? - django

Can we use the same serializer for creating, updating and getting a resource? Is it a best practice to do so?

Can we use the same serializer for creating, updating and getting a resource?
Why, yes, of course. Even more than that, we can use the exact same serializer for partially updating (PATCH) and deleting (DELETE) a resource.
This is because the serializer doesn't actually "know" about any of these operations; it only serializes and deserializes data -- it is the view that handles HTTP methods.
Is it a best practice to do so?
It is most definitely not bad practice.
But is it good? That really depends on what behaviour you expect for each of these operations, whether you have nested objects, and so on.
I would strongly suggest you read more from the docs, especially about ModelSerializer.
Good luck.
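To make the answer concrete, here is a minimal sketch of serializer reuse (the Article model, its fields, and the import path are made-up names for illustration, not from the question):

```python
# Hypothetical sketch: one ModelSerializer reused for every HTTP method.
from rest_framework import serializers, viewsets

from myapp.models import Article  # assumed model


class ArticleSerializer(serializers.ModelSerializer):
    class Meta:
        model = Article
        fields = ["id", "title", "body"]


# The ViewSet is what maps GET/POST/PUT/PATCH/DELETE to
# list/create/update/partial_update/destroy; the serializer just
# converts between Article instances and primitive data.
class ArticleViewSet(viewsets.ModelViewSet):
    queryset = Article.objects.all()
    serializer_class = ArticleSerializer
```

This is why reuse works: the serializer is method-agnostic, and the view decides which operation to perform.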

Related

Is the DRF ModelSerializer faster or slower than the standard Serializer?

At work we tend to stay away from using the ModelSerializer in the Django Rest Framework. From what I have heard it is said to be faster in some respects. Is this the case?
And what are the advantages of using the standard serializer instead of the ModelSerializer?
ModelSerializer is used when you need to serialize a model, while the regular Serializer is used when you need to serialize information that might not map to a model.
Check this article
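For illustration, a hedged sketch of the two kinds (SearchQuerySerializer, CommentSerializer, and the Comment model are hypothetical names, not from the question):

```python
from rest_framework import serializers

from myapp.models import Comment  # assumed model


# A plain Serializer: every field is declared explicitly, and the data
# doesn't have to correspond to any model at all.
class SearchQuerySerializer(serializers.Serializer):
    term = serializers.CharField(max_length=100)
    page = serializers.IntegerField(default=1)


# A ModelSerializer: fields, validators, and default create()/update()
# implementations are generated from the model's definition.
class CommentSerializer(serializers.ModelSerializer):
    class Meta:
        model = Comment
        fields = ["id", "author", "text"]
```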
Okay, one of the main reasons I have heard, after talking to some people, is that when it comes to using a ModelViewSet you end up with a number of API endpoints that can be redundant.
There is also a lot more to take care of.

Django models multiple query handling exception

I have a view that saves data in multiple models, as there are numerous relations.
Model1.objects.create(**name)
Model2.objects.create(**name)
Model3.objects.create(**name)
Currently I'm using a try/except block for each model.
Is there a way to handle the exceptions for all of these in a better way?
A good way to handle that is Design by Contract rather than defensive programming. In your case, that means verifying the integrity of the data passed as arguments and handling possible errors before calling the create methods, making sure you only call them in situations where no error will occur. That way, there is no need for try/except.
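The verify-first idea can be sketched in plain Python (the contract, the field names, and the create_all helper are hypothetical; in Django you might additionally wrap the creates in transaction.atomic so they succeed or fail together):

```python
# Sketch of the "verify first, then call" approach: check the data once,
# up front, so the create calls run only when no error can occur.
REQUIRED_FIELDS = {"name", "email"}  # hypothetical contract


def is_valid(data):
    """Return True when the payload satisfies the contract."""
    return REQUIRED_FIELDS <= data.keys() and all(data[f] for f in REQUIRED_FIELDS)


def create_all(data, creators):
    """Call every create function only after the data has been validated."""
    if not is_valid(data):
        raise ValueError("payload violates the contract: %r" % (data,))
    return [create(**data) for create in creators]
```

Each `creator` here stands in for something like `Model1.objects.create`; the single validation step replaces the per-model try/except blocks.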

Is this abuse of a try/except?

This seems like it will do what it 'needs to do' but I get the sense that it's a bad shortcut. I mean we have all these pub-sub libraries for a reason, right?
def fakeMessagePasser(myFunction, listOfListeners):
    for obj in listOfListeners:
        try:
            success = getattr(obj, myFunction)()
        except AttributeError:
            handleTheSituationCorrectly()
I know that Python prefers to ask forgiveness over permission, but if that's the case why do people ever bother with a 'complex' subscription-based object messaging library in the first place? It seems like the language is set up to handle this innately -- but as often happens, I may just have a big hole in my knowledge that would otherwise inform me as to why this is A Bad Thing.
Is this even good - or put another way, intended - application of a try/except? Like, if we were in a game loop and we saw something like this:
# incoming pseudocode, not based on anything in particular
for enemy in objectQueue:
    if enemy.hasGoodGuyInSights():
        try:
            enemy.attack()
        except AttributeError:  # maybe this object has no attack method, it just 'follows' or something, who knows why bad guys do anything really
            handleTheSituationCorrectly()
This doesn't directly contribute to the death of a family member or anything, but is it good use of a try/except -- or maybe more to the point, is it considered 'pythonic' to do this in this way?
I ask because I feel as though I typically see try/except in place of type-checking: we want to treat objects as if they were of a certain type, and when that fails we handle it correctly. So it seems like there's a difference between using try/except to make sure we iterate over a list or a dict, versus using it to call methods and then failing/ignoring that 'not-a-message' correctly. Right?
What you describe can simulate pub-sub in a limited fashion--good enough I would think for a minimal testing framework. But you miss out on some essential aspects that you will need in production:
Asynchronous processing. In async frameworks such as Twisted, promises and callbacks are used to enforce the relative order of operations while allowing some operations to proceed while others wait.
Clustering. With your method everything must happen within one process/thread. If you use a broker to do pub-sub you can cluster without changing any of the worker code (assuming the worker code is designed correctly).
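For contrast, a minimal in-process pub-sub broker (all names hypothetical) makes the difference explicit: listeners register interest in a topic, so the publisher never has to probe objects for a method and catch AttributeError:

```python
from collections import defaultdict


class Broker:
    """Tiny synchronous pub-sub: subscribers opt in per topic, so the
    publisher never has to guess which objects can handle a message."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, payload):
        # Only callbacks that explicitly subscribed are invoked -- no
        # AttributeError handling needed, unlike the getattr approach.
        for callback in self._subscribers[topic]:
            callback(payload)
```

Usage: `broker.subscribe("attack", some_callback)` then `broker.publish("attack", payload)` invokes only the registered callbacks; real libraries add what this sketch lacks (async delivery, clustering, unsubscription).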

Django, polymorphism and N+1 queries problem

I'm writing an app in Django where I'd like to make use of implicit inheritance when using ForeignKeys. As far as I can tell, the only way to handle this nicely is the django_polymorphic library (no single table inheritance in Django, WHY OH WHY??).
I'd like to know about the performance implications of this solution. What kind of joins are performed when doing polymorphic queries? Does it have to hit the database multiple times as compared to regular queries (the infamous N+1 queries problem)? The docs warn that "the type of queries that are performed aren't handled efficiently by the modern RDBMs", but they don't really say what those queries are. Any statistics or experiences would be really helpful.
EDIT:
Is there any way of retrieving a list of objects, each being an instance of its actual class, with a constant number of queries? I thought this is what the aforementioned library does, but now I'm confused and not so certain anymore.
Django-Typed-Models is an alternative to Django-Polymorphic which takes a simple & clean approach to solving the single table inheritance issue. It works off a 'type' attribute which is added to your model. When you save it, the class is persisted into the 'type' attribute. At query time, the attribute is used to set the class of the resulting object.
It does what you expect query-wise (every object returned from a queryset is the downcasted class) without needing special syntax or the scary volume of code associated with Django-Polymorphic. And no extra database queries.
In Django, inherited models are internally represented through a OneToOneField. If you use select_related() in a query, Django will follow a one-to-one relation forwards and backwards to include the referenced table with a join, so you wouldn't need to hit the database twice if you are using select_related.
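The select_related point can be sketched with a pair of hypothetical models (Animal and Dog are made-up names, not from the question):

```python
from django.db import models


# Multi-table inheritance gives Dog an implicit OneToOneField
# ("animal_ptr") pointing back at Animal.
class Animal(models.Model):
    name = models.CharField(max_length=50)


class Dog(Animal):
    breed = models.CharField(max_length=50)


# Naive downcasting costs one extra query per row (the N+1 pattern):
#     for animal in Animal.objects.all():
#         print(animal.dog.breed)  # hits the dog table each time
#
# With select_related, Django follows the reverse one-to-one with a
# join, so the whole list comes back in a single query.
animals = Animal.objects.select_related("dog")
```

Note that accessing `animal.dog` still raises Dog.DoesNotExist for rows that aren't dogs; the join only removes the per-row queries.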
Ok, I've dug a little bit further and found this nice passage:
https://github.com/bconstantin/django_polymorphic/blob/master/DOCS.rst#performance-considerations
So happily this library does something reasonably sane. That's good to know.

Is RPC disguised as REST a bad idea?

Our whole system is being designed around REST, and we are now considering how processes which are quite clearly RPC in intent can be mapped to RESTful resources without using verbs in the URL. Our remote procedure call is used to rebuild our search index when a content listing has been modified elsewhere.
What we are thinking about doing is this:
POST /index_updates
<indexUpdate><contentId>123</contentId></indexUpdate>
Nothing wrong with that in itself, but the smell is that the call which created this resource does not return the URL of the newly created resource, e.g. /index_updates/1234, which we could then access with a GET.
The indexing engine we are using does have a log mechanism, so in theory we could return a URL to an index_update resource so as to allow a GET to retrieve it, but to be honest we're not interested in the resource, as this is nothing more than an RPC in disguise.
So my question is whether RESTfulness is expressed in structure or intent. I feel the structure of what I have outlined is restful, but the intent is not.
Does anyone have any comments or advice?
Thanks,
Chris
Use the right tool for the job. In this case, it definitely seems like the right tool is a pure remote procedure call, and there's no reason to pretend it's REST.
One reason you might return a new resource identifier from your POST /index_updates call is to monitor the status of the operation.
POST /index_updates
<contentId>123</contentId>
201 Created
Location: /index_updates/a9283b734e
GET /index_updates/a9283b734e
<index_update><percent_complete>89</percent_complete></index_update>
This is obviously a subjective field, but GET, PUT, POST and DELETE is a rich enough vocabulary to describe anything. And when I go to non-English-speaking Asian countries I just point and they know what I mean, since I don't speak the language... but it's hard to really get into a nice conversation with someone...
It's not a bad idea to disguise RPC as REST, since that's the whole exercise. Personally, I think SOAP has been bashed and hated while in fact it has many strengths (and with HTTP compression, HTTP/SSL, and cookies, many more strengths)... and your app is really exposing methods for the client to call. Why would you want to translate that to REST? I've never been convinced. SOAP lets you use a language that we know and love, that of the programming interface.
But to answer your question, is it a bad idea to disguise RPC as REST? No. Disguising RPC as REST and translating to the four basic operations is what the thing is about. Whether you think that's cool or not is a different story.