Django provides different ways to change settings in tests (see the documentation) at different levels (TestCase class, test method, context manager). I understand the difference between override_settings and modify_settings, but I can't work out the difference between SimpleTestCase.settings() and django.test.override_settings() when used as a context manager. Is there any difference in functionality, or a preference for which one to use?
I guess settings() and override_settings() can both be used effectively as context managers, but the documentation presents settings() as a context manager and override_settings() as a decorator.
It is quite possible that in the future override_settings() will only be usable as a decorator. It is best to follow the documentation and not misuse the tools Django provides. The Django developers may have allowed both usages to ease the transition from the previous behaviour, and the overlap might only last for a few versions.
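For illustration, here is a minimal sketch of the two helpers used the way the documentation shows them, inside a test case (DEBUG is just a convenient setting to flip for the example):

from django.conf import settings
from django.test import SimpleTestCase, override_settings


class SettingsHelpersTests(SimpleTestCase):
    def test_settings_as_context_manager(self):
        # Documented usage of SimpleTestCase.settings().
        with self.settings(DEBUG=True):
            self.assertTrue(settings.DEBUG)

    @override_settings(DEBUG=True)
    def test_override_settings_as_decorator(self):
        # Documented usage of override_settings().
        self.assertTrue(settings.DEBUG)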
I see I can easily modify the Meta options of a Serializer at run time (I'm not even sure this is the right way to describe it; I've read some people call it monkey patching, even though I don't like the term):
NodeDetailSerializer.Meta.fields.append('somefield')
What if I need to do something like:
NodeDetailSerializer.contact = serializers.HyperlinkedIdentityField(view_name='api_node_contact', slug_field='slug')
NodeDetailSerializer.Meta.fields.append('contact')
Why would I need to do that?
I'm trying to build a modular application: I have some optional apps that can be added in, and they automatically add features to the core ones.
I would like to keep the code of the two apps separate, also because the additional applications might be moved to a different repository.
Writing modular and extensible apps is really a tricky business.
I'd like to know more about it if anybody has some useful resources to share.
Federico
I found a solution for my problem.
My problem was: I needed to be able to add hyperlinks to other resources without editing the code of a core app. I needed to do it from the code of the additional module.
I wrote this serializer mixin: https://gist.github.com/nemesisdesign/8132696
Which can be used this way:
from myapp.serializers import MyExtensibleSerializer
MyExtensibleSerializer.add_relationship(**{
    'name': 'key_name',
    'view_name': 'view_name_in_urls_py',
    'lookup_field': 'arg_passed_to_to_view_name'
})
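For illustration only, here is a rough sketch of how such an add_relationship class method could be written against a recent Django REST Framework. This is not the code from the gist linked above; _declared_fields is where DRF 3's serializer metaclass keeps declared fields, and the example assumes that internal detail:

from rest_framework import serializers


class ExtensibleSerializerMixin(object):
    @classmethod
    def add_relationship(cls, name, view_name, lookup_field):
        # Register a new hyperlinked field on the serializer class...
        cls._declared_fields[name] = serializers.HyperlinkedIdentityField(
            view_name=view_name,
            lookup_field=lookup_field,
        )
        # ...and expose it in Meta.fields so it gets rendered.
        if name not in cls.Meta.fields:
            cls.Meta.fields = list(cls.Meta.fields) + [name]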
I am working on a Django website/project; it has already been internationalised/localised to US English, GB English and Mandarin.
It is deployed from the same codebase, except for the settings config which states which language to use. Some deployments are Mandarin only, others are US English.
The client now has a requirement to change some of the language used within the GB English version for a specific deployment. My main goal is not to duplicate things, and I think I can get what I need out of Django i18n.
Basically, I am looking to find out whether I can, or should, use Django i18n to handle:
'Welcome' on deployA
'Oh hai' on deployB,
even though they're still both GB-English based sites. I feel I should be able to say that deployA will use 'en_GB' and deployB will use 'en_GB_special'.
I suppose it's the fact that I want to use a non-standard i18n name/code that is making me wonder if I should do this, or if I am approaching this in the wrong manner.
I would only create a new language if you're intending to maintain two translations. If the new site needs to stay in sync with en_GB, and/or you intend to use the customization in another language, then I think you'd be better off creating new messages, adding a string for them to en_GB, and adding a flag to your application to switch the feature for your feline client.
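For example, a minimal sketch of the flag approach, where both greetings live as msgids in the en_GB catalogue and a hypothetical per-deployment setting (USE_SPECIAL_GREETING is my invention, not a standard Django setting) picks between them:

from django.conf import settings
from django.utils.translation import gettext as _


def greeting_text():
    # deployB would set USE_SPECIAL_GREETING = True in its settings file;
    # deployA leaves it unset and gets the default message.
    if getattr(settings, 'USE_SPECIAL_GREETING', False):
        return _("Oh hai")
    return _("Welcome")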
I know that in Django integration tests it is easy to check whether a page loaded successfully by making sure the status code is 200. However, the project I am working on has pages that might only partially load (certain sections of the page silently fail to load). What is the best way to catch this situation? Is there a way to surface such an error in the HTTP response?
I know I could run a regex over the text of the page to check for things that might not have loaded, or check that certain CSS class names exist, but that does not seem like a very robust approach.
This will greatly depend on your implementation details, but there are two suitable approaches to testing templates that may help you:
If the partial page loading can be tested/triggered by using nothing more than template syntax, create test templates that conditionally print some text you can match against in the response, such as WORKED or FOO.
If it's something that largely depends upon the context the template receives, then one-off test views, which you define alongside your test case and call directly by passing in a mocked request, work as well. In this case, you'll likely rely on the test view to raise exceptions if the page rendering won't proceed as expected, otherwise everything went well.
Alternatively, you can even mix the two. In this case, you'll rely on the view to generate an HTTP response which you'll then check for some test text.
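A minimal sketch of such a one-off test view and check (the template path and the WORKED marker are made up for illustration):

from django.template.response import TemplateResponse
from django.test import RequestFactory, TestCase


def _probe_view(request):
    # Rendering raises if the section cannot be built; otherwise the test
    # template prints a marker string such as WORKED.
    return TemplateResponse(request, 'myapp/tests/section_probe.html', {'items': []})


class SectionRenderTests(TestCase):
    def test_section_renders_fully(self):
        request = RequestFactory().get('/fake-path/')
        response = _probe_view(request).render()  # TemplateResponse is lazy
        self.assertIn(b'WORKED', response.content)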
If that doesn't work, you can resort to overriding the templates. The general problem is that you can't rely on matching against production text, because it's global: the template can change and cause your tests to misfire. What you can do instead is have test-specific settings that add additional directories for template discovery, where you provide alternative template implementations containing text that does not change and is therefore safe to match against in the test. The difficulty with this approach is that it does not neatly document itself, as the previous two approaches do.
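A hedged sketch of that idea, using override_settings to point template discovery at a test-only directory first (the directory layout, URL and marker string are assumptions):

import os

from django.test import TestCase, override_settings

TEST_TEMPLATE_DIR = os.path.join(os.path.dirname(__file__), 'test_templates')


@override_settings(TEMPLATES=[{
    'BACKEND': 'django.template.backends.django.DjangoTemplates',
    'DIRS': [TEST_TEMPLATE_DIR],  # searched before the apps' own templates
    'APP_DIRS': True,
    'OPTIONS': {},
}])
class OverriddenTemplateTests(TestCase):
    def test_page_renders_test_template(self):
        response = self.client.get('/some-page/')
        self.assertContains(response, 'TEST-TEMPLATE-MARKER')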
In our company we make news portals for a pretty big number of local newspapers (currently 13, going to 30 next month and more in the future), each with 2k to 100k page views/day. Since we are evolving from a situation where each site was heavily customized to one where each difference is a matter of configuration or a custom template, our software is already pretty much the same for all sites.
Right now our deployment strategy is one gunicorn instance per site (with 1-17 workers each, depending on the site's traffic) on a 16-core server with 12GB RAM. The problem with this setup is that each worker (regular pre-forked gunicorn) takes 110MB whether it's being used or not, so with the new sites we would need to add more RAM to serve not that many more requests; basically, it doesn't scale. Also, since we are coming from a model where each site is independent, each site has its own database, and I quite like it that way, especially since we are using relational databases (MySQL, but migrating to PostgreSQL), so it's much easier to shard.
I'm doing some research and experimenting with running all the sites on one gunicorn instance, so I could use the servers fully and add more servers behind a load balancer when it comes to that. The problem is that Django assumes in a lot of places that only one site is running per process, so from what I've thought of so far I'd have to implement:
A middleware that takes the HTTP_HOST from the request and places an identifier on a threadlocal variable.
A template loader that uses that variable to load custom templates accordingly.
Monkey patching django.db.models.Model, probably adding a metaclass (not even sure that's possible, but I think I would need it because of the custom managers we sometimes need to use) that would swap the managers for ones that first call db_manager(identifier) on the original manager and then call the intended method. I would also need to override the save and delete methods to always include the using=identifier parameter (a rough sketch of this idea follows the list).
I guess I would need to stop using inclusion_tag decorators, not a big problem, but I need to think of other cases like this.
Heavy and ugly patching of urlresolvers if I need custom or extra urls for each site. I don't need them now, but probably will at some point.
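Roughly, what I have in mind for the threadlocal and the model patching is something like this (just a sketch, nothing implemented or tested, and all the names are invented):

import threading

from django.db import models

_state = threading.local()


def get_current_site_alias(default='default'):
    # Hypothetical accessor; the middleware from the first item would set
    # _state.alias per request based on HTTP_HOST.
    return getattr(_state, 'alias', default)


class SiteAwareManager(models.Manager):
    def get_queryset(self):
        # Route reads to the current site's database.
        return super().get_queryset().using(get_current_site_alias())


class SiteAwareModel(models.Model):
    objects = SiteAwareManager()

    class Meta:
        abstract = True

    def save(self, *args, **kwargs):
        kwargs.setdefault('using', get_current_site_alias())
        return super().save(*args, **kwargs)

    def delete(self, *args, **kwargs):
        kwargs.setdefault('using', get_current_site_alias())
        return super().delete(*args, **kwargs)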
And this is just what I came up with without even implementing it and seeing where it breaks; I'm sure I'd need many more changes for it to work. So I really don't want to do it, especially with the extra maintenance effort it would need, but I don't see any alternatives and would love to learn that someone has already solved this in a better way. Of course, I could also stop using Django altogether (I already have many reasons to do so), but that would mean a major rewrite and having to maintain two incompatible branches of the software until the new one reached feature parity with the Django version, so to me it seems even worse than all the ugly hacks.
I've recently developed an e-commerce system with similar requirements -- many instances running from the same project sharing almost everything. The previous version of the system was a bunch of independent installations (~30) so it was pretty unmaintainable. I'm sure the requirements still differ from yours (for example, all instances shared the same models in my case), but it still might be useful to share my experience.
You are right that Django doesn't help with scenarios like this out of the box, but it's actually surprisingly easy to work it around. Here is a brief description of what I did.
I could see a synergy between what I wanted to achieve and django.contrib.sites, partly because many third-party Django apps out there know how to work with it and use it, for example, to generate absolute URLs to the current site. The major problem with sites is that it wants you to specify the current site id in settings.SITE_ID, which is a very naive approach to the multi-host problem. What one naturally wants, and what you also mention, is to determine the current site from the Host request header. To fix this problem, I borrowed the hook idea from django-multisite: https://github.com/shestera/django-multisite/blob/master/multisite/threadlocals.py#L19
Next I created an app encapsulating all the functionality related to the multi-host aspect of my project. In my case the app was called stores and, among other things, it featured two important classes: stores.middleware.StoreMiddleware and stores.models.Store.
The model class is a subclass of django.contrib.sites.models.Site. The good thing about subclassing Site is that you can pass a Store to any function where a Site is expected. So you are effectively still just using the old, well documented and tested sites framework. To the Store class I added all the fields needed to configure all the different stores. So it's got fields like urlconf, theme, robots_txt and whatnot.
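A rough sketch of what such a Store model could look like (the field types and options here are illustrative, not my actual code):

from django.contrib.sites.models import Site
from django.db import models


class Store(Site):
    # Dotted path to an optional per-store URLconf, e.g. 'stores.urls.shop_x'.
    urlconf = models.CharField(max_length=255, blank=True, null=True)
    theme = models.CharField(max_length=100, blank=True)
    robots_txt = models.TextField(blank=True)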
The middleware class's job was to match the Host header with the corresponding Store instance in the database. Once the matching Store was retrieved, it would patch SITE_ID in a way similar to https://github.com/shestera/django-multisite/blob/master/multisite/middleware.py. It also looked at the store's urlconf and, if it was not None, set request.urlconf to apply the store's special URL requirements. After that, the current Store instance was stored in request.store. This proved incredibly useful, because I was able to do things like this in my views:
def homepage(request):
    featured = Product.objects.filter(featured=True, store=request.store)
    ...
request.store became a natural additional dimension of the request object throughout the project for me.
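For illustration, a simplified sketch of what that middleware could look like in the modern middleware style (the SITE_ID patching borrowed from django-multisite is left out and the host handling is reduced to the basics; this is not my original code):

from django.http import Http404

from stores.models import Store


class StoreMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        host = request.get_host().split(':')[0]  # drop the port, if any
        try:
            store = Store.objects.get(domain=host)
        except Store.DoesNotExist:
            raise Http404("No store configured for host %r" % host)

        if store.urlconf:
            request.urlconf = store.urlconf  # per-store URL requirements

        request.store = store
        return self.get_response(request)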
Another thing that was defined on the Store class was a function get_absolute_url whose implementation looked roughly like this:
def get_absolute_url(self, to='/'):
    """
    Return an absolute url to this `Store` or to `to` on this store.
    The URL includes http:// and the domain name of the store.
    `to` can be an object with `get_absolute_url()` or an absolute path as string.
    """
    if isinstance(to, basestring):
        path = to
    elif hasattr(to, 'get_absolute_url'):
        path = to.get_absolute_url()
    else:
        raise ValueError(
            'Invalid argument (need a string or an object with get_absolute_url): %s' % to
        )
    url = 'http://%s%s%s' % (
        self.domain,
        # This setting allowed for a sane development environment
        # where I just set it to ".dev:8000" and configured `dnsmasq`.
        # The same value was also removed from the `Host` value in the middleware
        # before looking up the `Store` in the database.
        settings.DOMAIN_SUFFIX,
        path
    )
    return url
So I could easily generate URLs to objects on stores other than the current one, e.g.:
# Redirect to `product` on `store`.
redirect(store.get_absolute_url(product))
This was basically all I needed to be able to implement a system allowing users to create a new e-shop living on its own domain via the Django admin.
Let's say that I have a system that has some pages that are public (both non-authenticated users and logged-in users can view) and others which only logged-in users can view.
I want the template to show slightly different content for each of these two classes of pages. The @login_required view decorator is always used on views which only logged-in users can view. However, my template would need to know whether this decorator is used on the view from which it was invoked.
Please keep in mind that I do not care whether the user is logged in or not for the public pages. What I care about is whether a page can be viewed by the general public, and the absence of a @login_required decorator will tell me that.
Can anyone throw me a hint on how the template could know whether a particular decorator is being used on the view from which it was invoked?
Yes, it is possible, but not terribly straightforward. The complicating factor is that Django's login_required decorator actually passes through 2 levels of indirection (one dynamic function and one other decorator), to end up at django.contrib.auth.decorators._CheckLogin, which is a class with a __call__ method.
Let's say you have a non-django, garden-variety decorated function that looks like this:
def my_decorator(func):
    def inner():
        return func()
    return inner

@my_decorator
def foo():
    print foo.func_name

# results in: inner
Checking to see if the function foo has been wrapped can be as simple as checking the function object's name. You can do this inside the function. The name will actually be the name of the last wrapper function. For more complicated cases, you can use the inspect module to walk up the outer frames from the current frame if you're looking for something in particular.
In the case of Django, however, the fact that the decorator is actually an instance of the _CheckLogin class means that the function is not really a function, and therefore has no func_name property: trying the above code will raise an Exception.
Looking at the source code for django.contrib.auth.decorators._CheckLogin, however, shows that the _CheckLogin instance will have a login_url property. This is a pretty straightforward thing to test for:
@login_required
def my_view(request):
    is_private = hasattr(my_view, 'login_url')
Because _CheckLogin is also used to implement the other auth decorators, this approach will also work for permission_required, etc. I've never actually had a need to use this, however, so I really can't comment on what you should look for if you have multiple decorators around a single view... an exercise left to the reader, I guess (inspect the frame stack?).
As unrequested editorial advice, however, I would say that checking the function itself to see whether it was wrapped like this strikes me as a bit fiddly. You can probably imagine all sorts of unpredictable behaviour waiting to happen when a new developer comes to the project and slaps some other decorator on. In fact, you're also exposed to changes in the Django framework itself... a security risk waiting to happen.
I would recommend Van Gale's approach for that reason as something that is explicit, and therefore a much more robust implementation.
I would pass an extra context variable into the template.
So the view that has @login_required would pass a variable like private: True, and the other views would pass private: False.
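For example (the view and template names here are made up):

from django.contrib.auth.decorators import login_required
from django.shortcuts import render


def public_page(request):
    return render(request, 'page.html', {'private': False})


@login_required
def members_page(request):
    return render(request, 'page.html', {'private': True})

The template can then simply branch on the private variable.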
Why does your template need to know this? If the @login_required decorator is used, the view itself prevents people who aren't logged in from ever reaching the page, and therefore from ever seeing the template to begin with.
Templates are hierarchical, so why not have a @login_required version and a non-@login_required version, both of which inherit from the same parent?
This would keep the templates a lot cleaner and easier to maintain.