Django: where should the code go, views or models?

I have a bunch of REST views and a bunch of data posted from templates that needs to be cleaned. I am wondering where all the sanitizing and processing code should go: views.py or models.py? Am I missing some tricks I don't know about (like using a custom model manager)?
Currently I have kept the models clean, with just object creation and updates. The views are nicely organized as strictly REST views. But as the project grows, more and more code for handling data has accumulated (views.py has grown to ~150 lines). Should I put it in another file? If so, is it OK to pass the whole request object to that file (along with having to import models, and possibly sessions and messages)?

I have implemented my Django apps like this:
In models.py: just the model definitions, plus some code useful for signals, because my application works with other models.
In views.py: the code for managing the frontend forms and for building the JSON needed for the API calls to the external app.
Of course you can split the code into other files, such as functions.py or options.py, and call the functions when needed:
from .functions import calc
External app with REST Framework: this is the CORE of the application, where I call the models and where all the logic lives.
This is how I use it; by the way, it is usually better to use views.py for writing this kind of code.
Some Tips about The Model View Controller pattern in web applications
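
To make the question's "model manager" idea concrete: one common pattern is to push the sanitizing and creation logic into a custom manager so the view stays thin. A minimal sketch with hypothetical names (Article and create_from_post are illustrative, not from the question):

# models.py -- hypothetical sketch; model and method names are illustrative
from django.db import models

class ArticleManager(models.Manager):
    def create_from_post(self, data):
        """Sanitize raw POST data, then create the object."""
        title = data.get('title', '').strip()
        body = data.get('body', '').strip()
        return self.create(title=title, body=body)

class Article(models.Model):
    title = models.CharField(max_length=200)
    body = models.TextField()

    objects = ArticleManager()

# views.py -- the view only unpacks the request and delegates
def create_article(request):
    article = Article.objects.create_from_post(request.POST)
    ...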

Related

App Design in Django

I have a general algorithm design question. I am creating a Django app that will connect to an API, but I won't be storing these results (at least not at first). After I retrieve the data from the API I manipulate it accordingly and have already created a class with numerous methods to do this.
Should the programming logic for this be performed in the model or the view for the Django framework? Is one more sustainable than the other (e.g. in a few months I decide to store the information)? Also, is it best to encapsulate my class in its own file and then import it into the model/view?
Thanks!
Rob
If you don't need to store data, don't use Django's built-in models.
Write views and import your own modules/classes.
Bonus: If your views share a lot of logic (probably related to request/response handling), use class based views and write a common MyBaseView.
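A minimal sketch of that base-view idea (the names are hypothetical, and it assumes the shared logic is JSON response handling):

# views.py -- hypothetical sketch of a shared class-based base view
import json

from django.http import HttpResponse
from django.views.generic import View

class MyBaseView(View):
    """Common request/response handling shared by the API views."""
    def render_json(self, data, status=200):
        return HttpResponse(json.dumps(data),
                            content_type='application/json',
                            status=status)

class ReportView(MyBaseView):
    def get(self, request):
        # call into your own module/class here; a placeholder result:
        result = {'status': 'ok'}
        return self.render_json(result)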

Do I need another views file for my classes?

I am trying to make a calendar application. So I would need to make some classes to display the calendar and add events to it. Do those classes still go in my views file, or do I have to make a separate directory, import it into my views file, and then make an instance of it?
Most people put view classes into the views.py file.
If it becomes unmanageable or you need further organization, split them as necessary.
You don't need to import them into views.py; they are ultimately only used by urls.py, which doesn't require that views live in any particular place.
If you're talking about application logic, i.e. a huge class that's not necessarily tied to the HTTP request/response cycle, then it makes sense to place it in another file.
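
For example, urls.py can point at a view class wherever it lives; a sketch with hypothetical module and class names:

# urls.py -- hypothetical sketch; the modules and classes are illustrative
from django.conf.urls import url

from calendar_app.views import CalendarView            # the usual spot
from calendar_app.calendar_logic import AddEventView   # or any other module

urlpatterns = [
    url(r'^calendar/$', CalendarView.as_view()),
    url(r'^calendar/add/$', AddEventView.as_view()),
]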

Django design patterns for overriding models

I'm working on an e-commerce framework for Django. The chief design goal is to provide the bare minimum functionality in terms of models and views, instead allowing users of the library to extend or replace the components with their own.
The reasoning for this is that trying to develop a one-size-fits-all solution to e-commerce leads to overcomplicated code which is often far from optimal.
One approach to tackling this seems to be using inversion-of-control, either through Django's settings file or import hacks, but I've come up against a bit of a problem due to how Django registers its models.
The e-commerce framework provides a bunch of abstract models, as well as concrete versions in {app_label}/models.py. Views make use of Django's get_model(app_label,model) function to return the model class without having to hard-code the reference.
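For illustration, a sketch of that lookup (assuming a hypothetical 'shop' app with a 'Product' model):

# sketch: the view resolves the model class at runtime instead of importing it
from django.db.models import get_model

def product_detail(request, pk):
    Product = get_model('shop', 'Product')  # whichever class is registered wins
    product = Product.objects.get(pk=pk)
    ...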
This approach has some problems:
Users have to mimic the structure of the framework's apps, i.e. the app_label, and effectively replace our version of the app with their own.
Because the admin site works by looking for admin.py in each installed app, users have to mimic or explicitly import the framework's admin classes in order to use them. But importing them calls the register method, so the classes have to be unregistered if a user wants to customise them.
The user has to be extremely careful about how they import concrete models from the core framework. This is because Django's base model metaclass automatically registers a model with the app cache as soon as the class definition is read (i.e. upon __new__), and the first model registered with a specific label is the one you're stuck with. So you have to define all your override models BEFORE you import any of the core models. This means you end up with the messy situation of having a bunch of imports at the bottom of your modules rather than the top.
My thinking is to go further down the inversion-of-control rabbit hole:
All references to core components (models, views, admin, etc) replaced with calls to an IoC container
For all the core (e-commerce framework) models, replace the use of Django's base model metaclass with one that doesn't automatically register the models, then have the container explicitly register them on startup.
My question:
Is there a better way to solve this problem? The goal is to make it easy to customise the framework and override functionality without having to learn lots of annoying tricks. The key seems to be with models and the admin site.
I appreciate that using an IoC container isn't a common pattern in the Django world, so I want to avoid it if possible, but it is seeming like the right solution.
Did you look at the code from other projects with a similar approach?
Not sure if this covers your needs, but IMO the code of django-shop is worth a look.
This framework provides the basic logic, allowing you to provide custom logic where needed.
Customize via models
e.g. see productmodel.py:
#==============================================================================
# Extensibility
#==============================================================================
PRODUCT_MODEL = getattr(settings, 'SHOP_PRODUCT_MODEL',
                        'shop.models.defaults.product.Product')
Product = load_class(PRODUCT_MODEL, 'SHOP_PRODUCT_MODEL')
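
For context, load_class is django-shop's helper that resolves a dotted path to a class. Roughly, it does something like this (a sketch; the real implementation differs in details such as error reporting):

from importlib import import_module
from django.core.exceptions import ImproperlyConfigured

def load_class(class_path, setting_name):
    # split 'pkg.module.ClassName' into module path and class name
    module_path, class_name = class_path.rsplit('.', 1)
    try:
        return getattr(import_module(module_path), class_name)
    except (ImportError, AttributeError):
        raise ImproperlyConfigured('%s refers to a class that cannot be '
                                   'imported: %s' % (setting_name, class_path))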
Customize via logic/URLs
e.g. see the shop's simplevariation plugin.
It extends the cart logic, so it hooks in via the urlpatterns:
(r'^shop/cart/', include(simplevariations_urls)),
(r'^shop/', include(shop_urls)),
and extends the views:
...
from shop.views.cart import CartDetails
class SimplevariationCartDetails(CartDetails):
    """Cart view that answers GET and POST requests."""
...
The framework provides several points to hook in; the simplevariation plugin mentioned above additionally provides a cart modifier:
SHOP_CART_MODIFIERS = [
    ...
    'shop_simplevariations.cart_modifier.ProductOptionsModifier',
    ...
]
I worry that this explanation is not very clear; the concept is hard to summarize briefly. But take a look at the django-shop project and some of its extensions in the ecosystem.

How should multiple Django apps communicate with each other?

Other posters have previously said in this forum that when your Django app starts getting big and unmanageable, you should split it up into several apps. I'm at that point now. What are the best practices for allowing communication between these apps?
One of my apps (let's call it Processor) processes very large data sets. Once an hour it produces a small amount of new data for the other app. This other app (let's call it Presenter) displays the data to users.
How should Processor hand new data to Presenter? Should it simply import parts of Presenter's model, so it can create and save records in Presenter's database? That seems like tight coupling to me. Or should it pass the data by calling a function in Presenter? Or put the data in some sort of data store that both Processor and Presenter know about?
How do you all usually solve this problem?
/Martin
I would definitely go for importing Processor's models in the Presenter app. That's how, for instance, you add extra user info: you have a UserPreferences model with a ForeignKey to django.contrib.auth.models.User. You might have less of a bad feeling doing that there than between your own two apps, because django.contrib is the "standard library", but nevertheless, it is direct coupling.
If your applications are coupled, then your code should be coupled to reflect this. This follows the idea that explicit is better than implicit, right?
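As a sketch of that direct-coupling option (the model names here are hypothetical):

# presenter/models.py -- Presenter imports Processor's models directly
from django.db import models
from processor.models import DataPoint  # explicit, direct coupling

class Presentation(models.Model):
    data_point = models.ForeignKey(DataPoint)
    rendered_html = models.TextField()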
If, however, you're designing something a tad more generic (i.e. you're going to use multiple Presenter app instances for Processors of different kinds), you can store the specific models as a setting:
import processor_x.models
PRESENTER_PROCESSOR_MODELS = processor_x.models
Then, in your Presenter models:
from django.conf import settings
from django.db import models

class Presenter(models.Model):
    processor = models.ForeignKey(settings.PRESENTER_PROCESSOR_MODELS)
Caveat: I have never attempted this, but I don't recall settings being limited to strings, tuples, or lists!

Django: How to dynamically add tag field to third party apps without touching app's source code

Scenario: large project with many third party apps. Want to add tagging to those apps without having to modify the apps' source.
My first thought was to specify a list of models in settings.py (like ['appname.modelname',]) and call django-tagging's register function on each of them. The register function adds a TagField and a custom manager to the specified model. The problem with that approach is that the function needs to run BEFORE the DB schema is generated.
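For reference, the intended registration would look roughly like this (a sketch of the approach just described; the TAGGED_MODELS setting name is made up, and as described it runs into the timing problem below):

from django.conf import settings
from django.db.models import get_model
import tagging

for model_path in getattr(settings, 'TAGGED_MODELS', []):
    app_label, model_name = model_path.split('.')
    tagging.register(get_model(app_label, model_name))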
I tried running the register function directly in settings.py, but I need django.db.models.get_model to get the actual model reference from only a string, and I can't seem to import it from settings.py - no matter what I try I get an ImportError. The tagging.register function imports OK, however.
So I changed tactics and wrote a custom management command in an otherwise empty app. The problem there is that the only signal which hooks into syncdb is post_syncdb, which is useless to me since it fires after the DB schema has been generated.
The only other approach I can think of at the moment is to generate and run a South-style database schema migration. This seems more like a hack than a solution.
This seems like it should be a pretty common need, but I haven't been able to find a clean solution.
So my question is: is it possible to dynamically add fields to a model BEFORE the schema is generated? More specifically, is it possible to add tagging to a third-party model without editing its source?
To clarify, I know it is possible to create and store Tags without having a TagField on the model, but there is a major flaw in that approach: it is difficult to create and tag a new object at the same time.
From the docs:
You don't have to register your models in order to use them with the tagging application - many of the features added by registration are just convenience wrappers around the tagging API provided by the Tag and TaggedItem models and their managers, as documented further below.
Take a look at the API documentation and the examples that follow for how you can add tags to any arbitrary object in the system.
http://api.rst2a.com/1.0/rst2/html?uri=http://django-tagging.googlecode.com/svn/trunk/docs/overview.txt#tags
Updated
#views.py
def tag_model_view(request, model_id):
    instance_to_tag = SomeModel.objects.get(pk=model_id)
    setattr(instance_to_tag, 'tags_for_instance', request.POST['tags'])
    ...
    instance_to_tag.save()
    ...returns response
#models.py
#this is the post_save signal receiver
def tagging_post_save_handler(sender, instance, created, **kwargs):
    if hasattr(instance, 'tags_for_instance'):
        Tag.objects.update_tags(instance, instance.tags_for_instance)
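For completeness, the receiver still has to be connected somewhere, e.g. (assuming SomeModel is the model being tagged):

from django.db.models.signals import post_save

post_save.connect(tagging_post_save_handler, sender=SomeModel)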