Django context processor

I have a bunch of variables that need to be available to the view for all templates. It seems the best choice would be a context processor.
The documentation says:
A context processor has a very simple interface: It’s just a Python
function that takes one argument, an HttpRequest object, and returns a
dictionary that gets added to the template context. Each context
processor must return a dictionary.
If I need to do more advanced lookups, can I define other functions? Do the functions need to be in a class? I was thinking of creating a file named context_processors.py in my app folder.

You can define other functions, and the functions don't need to be in a class.
Typically, people put their context processors as functions into a context_processors.py like you're thinking of, and then name them all in settings.TEMPLATE_CONTEXT_PROCESSORS.
For example, here's an app that has the context_processors.py inside it: django-seo.
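For illustration, a minimal sketch of that layout (the app, function, and variable names here are made up):
# my_app/context_processors.py
def site_info(request):
    # Any extra lookups or helper-function calls can happen here.
    return {'site_name': 'My Site', 'support_email': 'help@example.com'}

# settings.py
TEMPLATE_CONTEXT_PROCESSORS = (
    "django.contrib.auth.context_processors.auth",
    "my_app.context_processors.site_info",
)
Any template rendered with a RequestContext (e.g. via render()) can then use {{ site_name }} and {{ support_email }} directly.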

How to send Flask data to HTML without having to assign it to each individual render_template? [duplicate]

I have a method that returns data which is needed in my base template (content for a global footer).
How do either (1) pass a variable into the base template (which other templates extend) or (2) pass a variable to all templates globally without explicitly adding it in a call to render_template?
From flask docs: Flask's Context Processors
To inject new variables automatically into the context of a template,
context processors exist in Flask. Context processors run before the
template is rendered and have the ability to inject new values into
the template context. A context processor is a function that returns a
dictionary. The keys and values of this dictionary are then merged
with the template context, for all templates in the app:
Example from docs:
@app.context_processor
def inject_user():
    return dict(user=g.user)
Note that this example uses the g variable, which is already accessible in templates.
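Applied to the footer case in the question, a minimal sketch might look like this (inject_footer and get_footer_content are made-up names standing in for your own code):
@app.context_processor
def inject_footer():
    # get_footer_content() is whatever method already returns the footer data.
    return dict(footer_content=get_footer_content())
Every template in the app, including the base template that the others extend, can then reference {{ footer_content }} without it being passed to render_template() explicitly.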

Specify typing for Django field in model (for Pylint)

I have created custom Django model-field subclasses based on CharField which use to_python() to ensure that the model objects returned hold more complex objects (some are lists, some are dicts with a specific format, etc.) -- I'm using MySQL, so some of the PostgreSQL field types are not available.
All is working great, but Pylint believes that all values in these fields will be strings and thus I get a lot of "unsupported-membership-test" and "unsubscriptable-object" warnings on code that uses these models. I can disable these individually, but I would prefer to let Pylint know that these models return certain object types. Type hints are not helping, e.g.:
class MealPrefs(models.Model):
    user = ...foreign key...
    prefs: dict[str, list[str]] = \
        custom_fields.DictOfListsExtendsCharField(
            default={'breakfast': ['cereal', 'toast'],
                     'lunch': ['sandwich']},
        )
I know that certain built-in Django fields return correct types for Pylint (CharField, IntegerField) and certain other extensions have figured out ways of specifying their type so Pylint is happy (MultiSelectField) but digging into their code, I can't figure out where the "magic" specifying the type returned would be.
(note: this question is not related to the INPUT:type of Django form fields)
Thanks!
I had a look at this out of curiosity, and I think most of the "magic" actually comes from pylint-django.
In the Django source code, e.g. for CharField, there is nothing that could really give a type hinter the notion that this is a string. And since the class inherits only from Field, which is also the parent of other non-string fields, the knowledge needs to be encoded elsewhere.
Digging through the source code for pylint-django, on the other hand, I found where this most likely happens:
in pylint_django.transforms.fields, several fields are hardcoded in a similar fashion:
_STR_FIELDS = ('CharField', 'SlugField', 'URLField', 'TextField', 'EmailField',
               'CommaSeparatedIntegerField', 'FilePathField', 'GenericIPAddressField',
               'IPAddressField', 'RegexField', 'SlugField')
Further below, a suspiciously named function, apply_type_shim, adds information to the class based on the type of field it is ('str', 'int', 'dict', 'list', etc.).
This additional information is passed to inference_tip, which according to the astroid docs, is used to add inference info (emphasis mine):
astroid can be used as more than an AST library, it also offers some
basic support of inference, it can infer what names might mean in a
given context, it can be used to solve attributes in a highly complex
class hierarchy, etc. We call this mechanism generally inference
throughout the project.
astroid is the underlying library used by Pylint to represent Python code, so I'm pretty sure that's how the information gets passed to Pylint. If you follow what happens when you import the plugin, you'll find this interesting bit in pylint_django/plugin.py, where it actually imports the transforms, effectively adding the inference tip to the AST node.
I think if you want to achieve the same with your own classes, you could either:
Directly derive from another Django field class that already has the associated type you're looking for.
Create and register an equivalent Pylint plugin that would also use astroid to add information to the class, so that Pylint knows what to do with it (a rough sketch of this option is below).
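A rough sketch of the second option, modelled on what pylint-django does (the module and field-class names are hypothetical, and the exact astroid imports may vary between versions):
# my_pylint_plugin.py -- load with: pylint --load-plugins=my_pylint_plugin
from astroid import MANAGER, inference_tip, nodes, scoped_nodes

def _is_my_custom_field(cls):
    # Match the names of your own field classes here.
    return cls.name in ('DictOfListsExtendsCharField',)

def _add_type_shim(cls, _context=None):
    # Pretend the field class also derives from the matching builtin,
    # mirroring pylint-django's apply_type_shim.
    base_nodes = scoped_nodes.builtin_lookup('dict')
    base_nodes = [n for n in base_nodes[1] if not isinstance(n, nodes.ImportFrom)]
    return iter([cls] + base_nodes)

def register(linter):
    # Required entry point for a Pylint plugin; the transform below does the work.
    pass

MANAGER.register_transform(nodes.ClassDef, inference_tip(_add_type_shim), _is_my_custom_field)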
I initially assumed you were using the pylint-django plugin, but maybe you explicitly use prospector, which automatically installs pylint-django if it finds Django.
Neither Pylint nor its plugin uses information from Python type annotations (PEP 484) when checking code. It can parse code with annotations without understanding them, e.g. it will not warn about "unused-import" if a name is used only in annotations. The message unsupported-membership-test is reported on a line with the expression something in object_A simply if the class A() doesn't have a __contains__ method. Similarly, the message unsubscriptable-object is related to the __getitem__ method.
You can patch pylint-django for your custom fields this way:
Add a function:
def my_apply_type_shim(cls, _context=None):  # noqa
    if cls.name == 'MyListField':
        base_nodes = scoped_nodes.builtin_lookup('list')
    elif cls.name == 'MyDictField':
        base_nodes = scoped_nodes.builtin_lookup('dict')
    else:
        return apply_type_shim(cls, _context)
    base_nodes = [n for n in base_nodes[1] if not isinstance(n, nodes.ImportFrom)]
    return iter([cls] + base_nodes)
into pylint_django/transforms/fields.py
and also replace apply_type_shim with my_apply_type_shim in the same file at this line:
def add_transforms(manager):
    manager.register_transform(nodes.ClassDef, inference_tip(my_apply_type_shim), is_model_or_form_field)
This adds base classes list or dict respectively, with their magic methods explained above, to your custom field classes if they are used in a Model or FormView.
Notes:
I also thought about a plugin or stub solution that does the same thing, but the prospector alternative seems complicated enough for SO that I prefer to simply patch the source after installation.
Model and FormView are the only classes used in Django that are created by metaclasses. It is a great idea to emulate a metaclass in plugin code and to control the analysis of simple attributes. If I remember correctly, MyPy, referenced in a comment here, also has a Django plugin, mypy-django, but only for FormView, because writing annotations for django.db is more complicated than working with attributes. I spent a week trying to work on it.

How to store a dynamic site-wide variable

I have an HTML file which is the base that other HTML documents extend. It's a static page, but I want to have a variable in the menu. I don't think it's wise to create a view for it, since I don't intend to let users visit the base alone. So where in my project can I store site-wide dynamic variables that can be called on any page without explicitly stating them in their views?
Thank you in advance.
For user specific variables, use session.
For global constants (not variables!), use settings.py.
For global variables, consider storing them in the database so access can be multithreading- and multiprocess-safe.
I looked around and saw different approaches, but the one that compromises the DRY philosophy the least, for me, is registering a template tag in your project and then using it in the base template. It's neater. See https://stackoverflow.com/a/21062774/6629594 for an example.
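As a rough sketch of that approach (the app, module, and tag names are made up):
# my_app/templatetags/menu_tags.py
from django import template

register = template.Library()

@register.simple_tag
def menu_variable():
    # Look this up wherever you keep it: the database, cache, settings, etc.
    return 'whatever the menu needs'
Then in the base template:
{% load menu_tags %}
<li>{% menu_variable %}</li>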
Storage can live in any number of places; I put mine in a stats model in the db so you get all the goodness of that (and it's easy to access in views).
I then have a context processor written as so:
# context_processors.py
def my_custom_context_processor(request):
    return {'custom_context_variable1': 'foo', 'custom_context_variable2': 'bar'}
Add this to your context processors in settings.py:
TEMPLATE_CONTEXT_PROCESSORS = (
    ...
    "my_app.context_processors.my_custom_context_processor",
)
Provided you use render() to render your templates, you can then just use:
{{ custom_context_variable1 }}
to return 'foo' in your template. Obviously returning strings is for example only, you can use anything you like so long as your context processor returns a dict.
You can also try using PHP pages.
Then access the variable on each page with an include of the 'file containing the var.php' on every page.
None of this will be visible in the source HTML, as it is only processed on the server side.
If you would like to try this, mail me and I will send you some sample code.

Get data from request in template django

I wonder: is it a bad idea to get data from the request session in the template, or is it better to pass the data into the context dict and render it (which needs to be done for each view)?
You can add this to your TEMPLATE_CONTEXT_PROCESSORS if you are accessing the request object in templates frequently (as I do for URL GET parameter processing):
"django.core.context_processors.request",
It's common practice to send what you need through the context in the view.
I feel like it gives you a little more security/certainty in what you're doing because you can keep your logic in the view where it should be rather than doing any checks in the template for things being in the request.
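A minimal sketch of that pattern (the view and template names are made up):
from django.shortcuts import render

def search(request):
    # Pull only what the template actually needs out of the request.
    context = {'query': request.GET.get('q', '')}
    return render(request, 'my_app/search.html', context)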
Edit:
The above is only true if you're looking to do something rarely. If you're regularly adding an element of the request to your templates you should indeed, as everybody else suggests, be writing context processors to make what you require available to all views.
Take a look at the docs; TEMPLATE_CONTEXT_PROCESSORS
Also give this chapter of the django book a read as it'll be very helpful; Chapter 9: Advanced Templates
Specifically this section;
Guidelines for Writing Your Own Context Processors
Here are a few tips for rolling your own:
Make each context processor responsible for the smallest subset of functionality possible. It’s easy to use multiple processors, so you might as well split functionality into logical pieces for future reuse.
Keep in mind that any context processor in TEMPLATE_CONTEXT_PROCESSORS will be available in every template powered by that settings file, so try to pick variable names that are unlikely to conflict with variable names your templates might be using independently. As variable names are case-sensitive, it’s not a bad idea to use all caps for variables that a processor provides.
It doesn’t matter where on the filesystem they live, as long as they’re on your Python path so you can point to them from the TEMPLATE_CONTEXT_PROCESSORS setting. With that said, the convention is to save them in a file called context_processors.py within your app or project.
Django gives you a way to put data into every template: context processors.
http://www.b-list.org/weblog/2006/jun/14/django-tips-template-context-processors/
https://docs.djangoproject.com/en/1.7/ref/templates/api/

Can a django template know whether the view it is invoked from has the @login_required decorator?

Let's say that I have a system that has some pages that are public (both non-authenticated users and logged-in users can view) and others which only logged-in users can view.
I want the template to show slightly different content for each of these two classes of pages. The @login_required view decorator is always used on views which only logged-in users can view. However, my template would need to know whether this decorator is used on the view from which the template was invoked.
Please keep in mind that I do not care whether the user is logged in or not for the public pages. What I care about is whether a page can be viewed by the general public, and the absence of a @login_required decorator will tell me that.
Can anyone throw me a hint on how the template would know whether a particular decorator is being used on the view from which the template was invoked?
Yes, it is possible, but not terribly straightforward. The complicating factor is that Django's login_required decorator actually passes through 2 levels of indirection (one dynamic function and one other decorator), to end up at django.contrib.auth.decorators._CheckLogin, which is a class with a __call__ method.
Let's say you have a non-django, garden-variety decorated function that looks like this:
def my_decorator(func):
    def inner():
        return func()
    return inner

@my_decorator
def foo():
    print foo.func_name

foo()  # results in: inner
Checking to see if the function foo has been wrapped can be as simple as checking the function object's name. You can do this inside the function. The name will actually be the name of the last wrapper function. For more complicated cases, you can use the inspect module to walk up the outer frames from the current frame if you're looking for something in particular.
In the case of Django, however, the fact that the decorator is actually an instance of the _CheckLogin class means that the function is not really a function, and therefore has no func_name property: trying the above code will raise an Exception.
Looking at the source code for django.contrib.auth.decorators._CheckLogin, however, shows that the _CheckLogin instance will have a login_url property. This is a pretty straightforward thing to test for:
@login_required
def my_view(request):
    is_private = hasattr(my_view, 'login_url')
Because _CheckLogin is also used to implement the other auth decorators, this approach will also work for permission_required, etc. I've never actually had a need to use this, however, so I really can't comment on what you should look for if you have multiple decorators around a single view... an exercise left to the reader, I guess (inspect the frame stack?).
As unrequested editorial advice, however, I would say that checking the function itself to see if it was wrapped like this strikes me as a bit fiddly. You can probably imagine all sorts of unpredictable behaviour waiting to happen when a new developer comes to the project and slaps some other decorator on. In fact, you're also exposed to changes in the Django framework itself... a security risk waiting to happen.
I would recommend Van Gale's approach for that reason as something that is explicit, and therefore a much more robust implementation.
I would pass an extra context variable into the template.
So, the view that has @login_required would pass a variable like private: True and the other views would pass private: False.
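A minimal sketch of that idea (the view and template names are made up):
from django.contrib.auth.decorators import login_required
from django.shortcuts import render

@login_required
def members_view(request):
    return render(request, 'page.html', {'private': True})

def public_view(request):
    return render(request, 'page.html', {'private': False})
The template can then branch on {% if private %} ... {% endif %}.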
Why does your template need to know this? If the @login_required decorator is used, the view itself prevents people who aren't logged in from ever reaching the page and therefore never seeing the template to begin with.
Templates are hierarchical, so why not have a @login_required version and a "no @login_required" version, both of which inherit from the same parent?
This would keep the templates a lot cleaner and easier to maintain.
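For example (a sketch; the template names are made up):
{# base.html -- shared parent #}
{% block banner %}{% endblock %}

{# base_private.html -- extended by templates for @login_required views #}
{% extends "base.html" %}
{% block banner %}Members-only content{% endblock %}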