This is not a question for a particular use case, but for something I noticed in my experience doing APIs, specifically with using Django and Django Rest Framework.
Months ago I had a problem with the API I maintain for a client's project.
Let's say we have the following model:
class Person(models.Model):
    first_name = models.CharField(max_length=100)
    last_name = models.CharField(max_length=100)
Then, the corresponding serializer:
class PersonSerializer(serializers.ModelSerializer):
    class Meta:
        model = Person
        fields = '__all__'
Of course, its corresponding ViewSet and route pointing to it:
http://localhost:8000/api/v1/persons/
Note this is the 1st version of my API.
Everything OK at this point, right?
Now, my client has asked that we receive the person's full name instead of the first and last name separately...
Supposedly, I'd have to change my model to:
class Person(models.Model):
    full_name = models.CharField(max_length=200)
There are 3 different clients (mobile apps) using this version of the API. Obviously I don't want to change my API, instead I will want to put the new approach in a new version.
BUT the 1st version's serializer is coupled to the model, so at this point the 1st version of the API has already changed.
What I expect to read in the answers below is how you deal with this problem in Django, and which way I should take for my next projects using the same stack.
Edit: My question's objective is to understand whether it is better to decouple the API from the models. I've put up a very, very basic example, but there are cases where things get much more complicated. For example, I needed to modify a M2M relation to use the through option in order to add more fields to the intermediate table.
It's the type of question that could be flagged as "recommend something", but whatever.
First of all, you need to extend your model with a full_name field. If you need to be able to write full_name to the model from your API, then you need an actual field; otherwise you can use a property:
@property
def full_name(self):
    return '{} {}'.format(self.first_name, self.last_name)
Then you can also include the field in the serializer:
class PersonSerializer(serializers.ModelSerializer):
    full_name = serializers.CharField()  # needed if full_name is a property

    class Meta:
        model = Person
        fields = '__all__'
Since your model's attribute name and the serializer field name match, you don't need to worry.
So the additional field won't break your existing clients and will satisfy your big client.
It's been a while since you asked this, but it's an interesting question I've been contemplating recently with the different versions of the OCPI protocol. What you describe here is basically one of the trickier situations, where a data structure needs to be refactored while retaining existing endpoints.
Shamelessly copying parts of the response from @vishes_shell, I would suggest you hide your changing data, provide properties to access it, and split your corresponding serializers and endpoints into two distinct versions.
@property
def full_name(self):
    return self._full_name or '{} {}'.format(self._first_name, self._last_name)

@property
def first_name(self):
    return self._first_name or self.parse_first_name(self._full_name)

@property
def last_name(self):
    return self._last_name or self.parse_last_name(self._full_name)
class PersonV1Serializer(serializers.Serializer):
    first_name = serializers.CharField(max_length=100)
    last_name = serializers.CharField(max_length=100)

class PersonV2Serializer(serializers.Serializer):
    full_name = serializers.CharField(max_length=200)
Create separate views:
http://localhost:8000/api/v1/persons/
http://localhost:8000/api/v2/persons/
This way you have your original and new endpoints working against shared data. If you wish, you can migrate the first_name and last_name data into full_name and use simpler logic for v1. That might be good for performance if you assume all clients will eventually migrate to v2, but I don't see a big difference here.
You can also find multiple different schemes for endpoint versioning in Django Rest at https://www.django-rest-framework.org/api-guide/versioning/.
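With DRF's URLPathVersioning scheme, a single ViewSet can serve both routes and pick a serializer from request.version. Here is a hedged, plain-Python sketch of that dispatch (the two serializer classes are stubbed so the snippet stands alone; serializer_for is a helper name invented here, and in a real ViewSet the lookup would live in get_serializer_class()):

```python
# Stub stand-ins for the two serializers defined above; in a real
# project these are the DRF serializer classes.
class PersonV1Serializer: ...
class PersonV2Serializer: ...

SERIALIZER_BY_VERSION = {
    "v1": PersonV1Serializer,
    "v2": PersonV2Serializer,
}

def serializer_for(version, default="v1"):
    """Map a request's API version to a serializer class, falling back
    to the default for missing or unknown versions."""
    return SERIALIZER_BY_VERSION.get(version or default,
                                     SERIALIZER_BY_VERSION[default])
```

In a DRF ViewSet this becomes `return serializer_for(self.request.version)` inside `get_serializer_class()`, with `DEFAULT_VERSIONING_CLASS` pointing at `rest_framework.versioning.URLPathVersioning` in settings.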
Please note that self.parse_first_name() and self.parse_last_name() (which I leave as an exercise for the reader) need to be able to handle missing names. The ModelSerializer is no longer used, since it cannot determine the data types for properties, so we provide the fields explicitly.
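A minimal sketch of those parse helpers, under the naive assumption that a full name splits on the first space (mononyms and empty values fall back to an empty string; real-world name parsing is messier than this):

```python
def parse_first_name(full_name):
    """Everything before the first space, or '' when there is no name."""
    if not full_name:
        return ''
    return full_name.split(' ', 1)[0]

def parse_last_name(full_name):
    """Everything after the first space, or '' for mononyms and empty values."""
    if not full_name:
        return ''
    parts = full_name.split(' ', 1)
    return parts[1] if len(parts) > 1 else ''
```

On the model these would be methods (the properties above call them as self.parse_first_name(...)), but the logic is the same.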
Hope this helps all who are battling with API versioning.
We add an api folder in each Django app folder, where files like version_1_1_1.py and version_1_2_2.py represent the different versions. Others have already suggested using different serializers, and that is the right approach. I also think your question is related to business requirements: you need to create a software specification (I explain this briefly at https://www.itstartechs.com/software-specification) that covers all the features needed, and update it whenever the client requests something new.
Another approach may be to adopt SOA (service-oriented architecture). If you are able to define the problem domains of your project and can see how to decouple them easily, that is a good sign. Once you have microservices solving problem domains A and B, you create a FastAPI or Django REST service that provides the API. Each API has its own interface and needs; a mobile app needs different things than an admin management app. This is a very nice article explaining API gateways: https://microservices.io/patterns/apigateway.html.
A good thing to have is an OpenAPI specification (https://swagger.io/specification/); it may be a good place for you to start if you don't have a software specification yet.
I need to capture some fairly complicated database changes from my users, including both updating and creating objects for multiple models.
I feel like the obvious way to do this would be by leveraging a sizeable amount of Javascript to create a JSON object containing all the necessary changes that can be POSTed in a single form. I am not keen on this approach as it prevents me from utilizing Django's CreateView and UpdateView classes, as well as the validation that comes with them. Also I am more comfortable in Python than Javascript.
I want to use a series of form POSTs to build up the necessary changes over time, but I also need the whole set of changes to be atomic across those requests, which, as far as I know, is not possible in Django. Another complication is that the models contain non-nullable fields, and I would need to create objects before capturing the user input required to fill them. I do not want to make these fields nullable or use placeholders, as this would make validation more difficult.
One approach I am considering is to create a duplicate of each of the necessary models to store partial objects. All fields would be nullable so the objects could be updated a bit at a time until all the forms have been POSTed. Objects in the original (main) model could then be created or updated to match the ones in the new (partial) model, which could then be deleted.
class Product(models.Model):
    field_a = models.CharField(max_length=255)
    field_b = models.PositiveIntegerField()

class PartialProduct(models.Model):
    field_a = models.CharField(max_length=255, blank=True, null=True)
    field_b = models.PositiveIntegerField(blank=True, null=True)
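The promotion step this approach hinges on can be kept small: gather the partial object's values, refuse while anything required is still missing, then create the real object and delete the draft in one transaction. A hedged sketch of the gathering part in plain Python (promote_kwargs and REQUIRED_FIELDS are names invented here; the Django calls appear only in the trailing comment):

```python
from types import SimpleNamespace

REQUIRED_FIELDS = ("field_a", "field_b")

def promote_kwargs(partial, required=REQUIRED_FIELDS):
    """Collect field values from a PartialProduct-like object, refusing
    to promote while any required field is still None/NULL."""
    values = {name: getattr(partial, name) for name in required}
    missing = sorted(name for name, value in values.items() if value is None)
    if missing:
        raise ValueError("cannot promote yet, missing: " + ", ".join(missing))
    return values

# A PartialProduct mid-edit would fail promotion:
draft = SimpleNamespace(field_a="Widget", field_b=None)

# In the final view, the swap would run atomically:
#   with transaction.atomic():
#       Product.objects.create(**promote_kwargs(partial))
#       partial.delete()
```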
The benefits of this approach as I see them are:
A multi-form approach, leveraging Django's model forms and related views as well as model validation.
Not polluting the main models with incomplete objects.
Enforcing fields not being null in the main models.
The potential drawbacks I can see are:
Duplicating any changes to the main model in the partial model (the approach is not DRY).
It is a somewhat complicated approach (simple is better than complex).
Are there any drawbacks to using this approach that I have not foreseen, or is there a better one I could use?
Let's imagine I have a simple model Recipe:
class Recipe(models.Model):
    name = models.CharField(max_length=constants.NAME_MAX_LENGTH)
    preparation_time = models.DurationField()
    thumbnail = models.ImageField(default=constants.RECIPE_DEFAULT_THUMBNAIL, upload_to=constants.RECIPE_CUSTOM_THUMBNAIL_LOCATION)
    ingredients = models.TextField()
    description = models.TextField()
I would like to create a view listing all the available recipes where only name, thumbnail, preparation_time and first 100 characters of description will be used. In addition I will have a dedicated view to render all remaining details for a single recipe.
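A hedged sketch of the "first 100 characters" preview in plain Python (description_preview is a helper name invented here; in a template, Django's built-in truncatechars filter does the same job):

```python
def description_preview(text, limit=100):
    """Return text unchanged when short enough, otherwise cut it to at
    most `limit` characters ending in an ellipsis."""
    if len(text) <= limit:
        return text
    return text[:limit - 1].rstrip() + "…"
```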
From the efficiency point of view, since description may be a long text, would it make sense to store the extra information in a separate model, let's say 'RecipeDetails' which would not be extracted in a list view but only in a detailed view (maybe using prefetch_related method)? I am thinking about something along:
class Recipe(models.Model):
    name = models.CharField(max_length=constants.NAME_MAX_LENGTH)
    preparation_time = models.DurationField()
    thumbnail = models.ImageField(default=constants.DEFAULT_THUMBNAIL, upload_to=constants.CUSTOM_THUMBNAIL_LOCATION)
    description_preview = models.CharField(max_length=100)

class RecipeDetails(models.Model):
    recipe = models.OneToOneField(Recipe, on_delete=models.CASCADE, related_name="details", primary_key=True)
    ingredients = models.TextField()
    description = models.TextField()
In my recent online searches, people seem to suggest that OneToOneField should be used for only two purposes: 1. inheritance and 2. extending existing models; in other cases, the two models should be merged into one. This may suggest I am missing something here. Is this a reasonable use of OneToOneField, or does it only add to the complexity of the overall design?
inheritance
Don't do that: inheritance is only useful when you have a base-class/sub-class relationship. The classic example is Animal and Cat/Dog, where cats and dogs share basic properties that can be extracted, but your Recipe and RecipeDetails don't have that kind of relationship.
From the efficiency point of view, since description may be a long text, would it make sense to store the extra information in a separate model
Storing the extra information in a separate model doesn't improve efficiency. The underlying database would create something like a ForeignKey column with unique=True to enforce uniqueness. As far as I'm concerned, OneToOneField is only useful when your original model is hard to change, e.g., it comes from a third-party package or some other awkward situation. Otherwise, I would still consider adding the fields to the Recipe model. That way you can manage your model easily while avoiding extra lookups like recipe.details.description; you can just do recipe.description.
No, it's not reasonable to split your recipes. First, your model should contain all the properties of a "Recipe" (and a recipe without ingredients is not a recipe at all). Second, if you want to improve performance, use Django's cache framework (it was created exactly for performance issues). Third, keep it simple and do not over-engineer your development cycle. Do you really need to improve performance right now?
Hope it helps!
First mistake in development: thinking about efficiency before your first version is running.
Get a first version running now; later you can think about making it faster, based on real use cases from that first version. After that you can check whether a new model and relations, or just a new field on the model, or Django's cache for views, will do the job.
Thinking about efficiency first will, by the way, "de-normalize" your database: when the model holding the full description is updated, you also need to issue an update to the model with the description_preview field. A trigger at the database level? Python code at the app level? Nightmares in code design... before your code even runs.
I'm trying to figure out how to track changes to a foreign-key relationship in Django using django-reversion.
In short, I am trying to model a Codelist, which contains Codes which only belong to one Codelist. This can be modelled using a foreign key like so:
class CodeList(models.Model):
    name = models.CharField(max_length=100)

class Code(models.Model):
    value = models.PositiveIntegerField()
    meaning = models.CharField(max_length=100)
    codelist = models.ForeignKey(CodeList, on_delete=models.CASCADE, related_name="codes")
Additionally, the only way to edit a code is by using an inline form in the admin site accessed via its codelist. For all intents and purposes, codes belong to codelists as they should...
Except when it comes to reversion.
I'm using the reversion.middleware.RevisionMiddleware to track all editing changes, as there are some non-admin forms for editing codes.
What I'd like is that when I view the history of a codelist, it shows changes to its codes as well, but I can't figure that out from the django-reversion API. The issue is that the API covers tracking the code and seeing changes to the codelist, not the other way around by following the reversed relationship.
Is anyone aware of how this might be done?
It's very well documented (I just couldn't find it at first): you can add the inverse relationship as a field to follow, like so:
reversion.register(CodeList, follow=["codes"])
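A fuller registration sketch, assuming the models above (a registration fragment, not runnable on its own): django-reversion also requires that every model named in follow be registered itself, so Code gets its own register() call.

```python
import reversion

reversion.register(Code)
reversion.register(CodeList, follow=["codes"])
```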
I need an elegant way of disabling or authorizing related field traversal in Django templates.
Imagine following setup for models.py:
class Person(models.Model):
    pass

class Secret(models.Model):
    owner = models.ForeignKey(Person, on_delete=models.CASCADE, related_name="secrets")
Now imagine this simple view that gives the template a QuerySet of all Person instances in the system, just so the template can put them in a list:
def show_people(request):
    return render_to_response("people.html", {"people": Person.objects.all()})
Now, my problem is that I would not be providing the templates myself in this imaginary system, and I don't fully trust those who make them. The show_people view gives the people.html template access to the secrets of the Person instances through related_name="secrets". This example is quite silly, but in reality I have model structures where template providers could access all kinds of vulnerable data through related managers.
The obvious solution would be not to give models to templates but to convert them into more secure data objects. But that would be a pain in my case because the system is already quite big and up and running.
I think a cool solution to this would be somehow preventing related field traversal in templates. Another solution would be to have such custom related managers that could have access to the request object and filter the initial query set according to the request.user.
A possible solution would be to use a custom models.Manager with your related models.
Set use_for_related_fields = True to force Django to use it instead of the plain manager, and modify the manager to filter the data as needed.
Also have a look at these:
Django: using managers for related object access (the use_for_related_fields docs)
Stack Overflow: a use_for_related_fields how-to, with a very good explanation.
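(A caveat for newer projects: use_for_related_fields was removed in Django 2.0; the replacement is pointing Meta.base_manager_name at a manager with the filtering behaviour.) The filtering rule itself is tiny. Here is a plain-Python sketch of it (visible_secrets is a name invented here; in a real manager this would be a .filter(owner=request.user) call in get_queryset(), and owner_id is the FK column Django generates):

```python
from types import SimpleNamespace

def visible_secrets(secrets, user):
    """Keep only the secrets owned by the requesting user; everything
    else stays hidden no matter how the template traverses relations."""
    return [secret for secret in secrets if secret.owner_id == user.id]

# Stand-ins for a Person and two Secret rows:
alice = SimpleNamespace(id=1)
mine = SimpleNamespace(owner_id=1)
theirs = SimpleNamespace(owner_id=2)
```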
I'm developing a SaaS and having the hardest time wrapping my brain around why I need to use "User" for anything other than myself. I don't know why, but it makes me queasy to think that I, as the developer/admin of the entire software, with full Django admin access (like the Eye of Sauron), have the same type of User object as an account holder's UserProfile has. Please help me understand why this is necessary.
Example:
class Account(models.Model):  # represents a corporate customer
    admin = models.ForeignKey(User, on_delete=models.CASCADE)
    # other fields ...

class UserProfile(models.Model):
    user = models.ForeignKey(User, on_delete=models.CASCADE)
    account = models.ForeignKey(Account, on_delete=models.CASCADE)
It feels like I'm mingling the builtin Admin functionality with my account holders' users' functionality. Is this just for purposes of reusing elements like request.user, etc.?
Well, reuse of code and functionality might be a happy side effect, but fundamentally I don't think this is broken.
A User represents someone using your website. At the base level it doesn't matter who that person is or what features they need; what matters is that they make requests and can be identified in some way.
Further functionality can be added in different layers, either through built-in components like Groups and Permissions, or through something you build on top yourself, as you are doing in your example.