How to create a Django package without setting DJANGO_SETTINGS_MODULE as an environment variable?

I am creating a package that itself uses Django and will be used within other Django applications. The main issue I am facing is that I need to use the settings for various reasons, such as logging and other extensive requirements. Since this package does not have any views/urls, we are writing tests and using pytest to run them. The tests will not run without the settings configured, so initially I put the following snippet in the __init__ file in the root app.
import os
import django
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "my_package.settings")
django.setup()
Now the tests ran properly and the package worked as a standalone app. But the moment I installed it in the main project, it overrode the environment variable with its own settings, and you can imagine the kind of havoc that would ensue.
This is the first time I am packaging a Django app, so I am not well-versed in best practices and the docs are a little convoluted. I read the structure and code of various packages that use settings in their package, but I am still not able to understand how to ensure that the package accesses the intended settings while the project's settings are not affected.
While going through the docs, I came across this alternative to setting DJANGO_SETTINGS_MODULE:
from django.conf import settings
settings.configure(DEBUG=True)
As shown here: https://docs.djangoproject.com/en/2.2/topics/settings/#using-settings-without-setting-django-settings-module
But where exactly am I supposed to add this? To every file where the settings are imported, or will it work in the __init__? (I tried the latter, but something isn't right; it shows "Apps aren't loaded".)
I tried the following as well, where I imported my settings as defaults, called configure with them, and then called django.setup(), but that didn't do the trick either:
# my_package/__init__.py
from django.conf import settings
from my_package import settings as default

if not settings.configured:
    settings.configure(default_settings=default, DEBUG=True)
    import django
    django.setup()
Also, I need settings mainly because I have a few parameters that can be overridden in the project that is using the package. When the package is installed, the overridden values are what I should be able to access in the package at runtime.
If someone can guide me on how to tackle this, or has a better process for creating packages that need Django settings, please do share.
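To illustrate the kind of override I mean, the pattern I would like to end up with is reading package parameters off the project settings with a fallback default, roughly like this (just a sketch; conf.py and MY_PACKAGE_TIMEOUT are placeholder names):
# my_package/conf.py (hypothetical helper module)
from django.conf import settings

# Use the project's value if it overrides the setting, otherwise the package default.
MY_PACKAGE_TIMEOUT = getattr(settings, "MY_PACKAGE_TIMEOUT", 30)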

So I ended up finding a way to work without setting the settings module as an environment variable. It lets the package use the intended settings by importing them, overrides and defaults alike, from its own settings module:
Create an apps.py file for configuring your package as an app.
# my_package/apps.py
from django.apps import AppConfig


class MyPackageConfig(AppConfig):
    name = 'my_package'
    verbose_name = 'My package'
Then, in your package's root, the following snippet in __init__.py will configure only the settings your package defines:
# my_package/__init__.py
import django
from django.conf import settings

from my_package import settings as overridden_settings

default_app_config = 'my_package.apps.MyPackageConfig'

if not settings.configured:
    # Get the list of attributes the module has
    attributes = dir(overridden_settings)
    conf = {}
    for attribute in attributes:
        # If the attribute is upper-cased, i.e. a settings variable, copy it into conf
        if attribute.isupper():
            conf[attribute] = getattr(overridden_settings, attribute)
    # Configure Django's settings using the collected values
    settings.configure(**conf)
    # This is needed since it is a standalone Django package
    django.setup()
Reference for what django.setup() will do:
https://docs.djangoproject.com/en/2.2/topics/settings/#calling-django-setup-is-required-for-standalone-django-usage
Points to keep in mind:
Since this is in __init__.py, importing anything from the package ensures that the settings are configured first.
As mentioned in the documentation linked above, make sure that the settings are configured only once and, similarly, that django.setup() is called only once, or it will raise an exception.
Let me know if this helps or if you come up with a better solution.
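Since the question mentions pytest, a related option (only a sketch, not the approach above) is to configure settings just for the test run in a conftest.py via the pytest_configure hook, which keeps __init__.py free of side effects; the INSTALLED_APPS and database values below are assumptions:
# conftest.py at the repository root
import django
from django.conf import settings


def pytest_configure():
    # Configure a minimal settings object for the test session only.
    if not settings.configured:
        settings.configure(
            DEBUG=True,
            INSTALLED_APPS=["django.contrib.contenttypes", "my_package"],
            DATABASES={"default": {"ENGINE": "django.db.backends.sqlite3", "NAME": ":memory:"}},
        )
        django.setup()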

Related

What's the best way to make a new migration in a standalone Django app?

I have a Django app which was spun out a while ago from the project it was originally built for, made into a separate standalone app, and put on PyPI (https://pypi.python.org/pypi/mysociety-django-images). The app is built expecting to be run with Django 1.7+.
I'd now like to make a change to one of the models there, just changing a max_length on a field, but I can't see anything in the documentation about how to make a new migration for it. Do I need to make an example project and use that, or is there a better way?
You can do this by making a script like:
#!/path/to/your python
import django
from django.conf import settings
from django.core.management import call_command

settings.configure(
    DEBUG=True,
    INSTALLED_APPS=(
        'django.contrib.contenttypes',
        'yourAppName',
    ),
)
django.setup()
call_command('makemigrations', 'yourAppName')
(This is also how we go about testing our standalone apps.)
I don't know which is the better practice between this and creating an example project, but this approach seems lighter.
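To spell out the testing variant mentioned above, the same configure-and-setup pattern can drive the test runner as well; this is only a sketch, and the app name and database settings are placeholders:
#!/usr/bin/env python
import django
from django.conf import settings
from django.core.management import call_command

settings.configure(
    DEBUG=True,
    DATABASES={'default': {'ENGINE': 'django.db.backends.sqlite3', 'NAME': ':memory:'}},
    INSTALLED_APPS=(
        'django.contrib.contenttypes',
        'yourAppName',
    ),
)
django.setup()
call_command('test', 'yourAppName')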

Best practices to integrate a django app with other django apps

In Django:
a) What is the best way to test that another app is installed? (By installed I mean to be in INSTALLED_APPS)
b) What is the recommended way to alter the behaviour of the current app accordingly? I understand that:
if "app_to_test" in settings.INSTALLED_APPS:
# Do something given app_to_test is installed
else:
# Do something given app_to_test is NOT installed
is possible, but is there another way? Is this the recommended way?
c) What is the recommended practice for importing modules that are only required if another app is installed? Import them inside the if block that tests for the installed app?
I tend to favour checking INSTALLED_APPS as you have listed in your question.
if DEBUG and 'debug_toolbar' not in INSTALLED_APPS:
    INSTALLED_APPS.append('debug_toolbar')
    INTERNAL_IPS = ('127.0.0.1',)
This works well when you have settings distributed across different settings files that don't necessarily have knowledge of each other. E.g. I might have a shared_settings.py which contains a base set of INSTALLED_APPS, then a debug_settings.py which imports shared_settings.py and adds any additional apps as required.
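For example, that split might look roughly like this (a sketch of the layout, assuming both files sit on the Python path):
# shared_settings.py
INSTALLED_APPS = [
    'django.contrib.contenttypes',
    'django.contrib.auth',
    'myapp',
]

# debug_settings.py
from shared_settings import *

DEBUG = True

if DEBUG and 'debug_toolbar' not in INSTALLED_APPS:
    INSTALLED_APPS.append('debug_toolbar')
    INTERNAL_IPS = ('127.0.0.1',)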
The same applies outside of settings files. For example, if you have Django South installed and want to create introspection rules for South only when it's installed, I would do this:
if 'south' in settings.INSTALLED_APPS:
    from south.modelsinspector import add_introspection_rules
    # Let South introspect custom fields for migration rules.
    add_introspection_rules([], [r"^myapp\.models\.fields\.SomeCustomField"])
As I see it, there's no need to try and import a module if you know there's the possibility that it may not be installed. If the user has listed the module in INSTALLED_APPS then it is expected to be importable.
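On Django 1.7+ there is also django.apps.apps.is_installed(), which checks the populated app registry by full app name; a minimal sketch:
from django.apps import apps

if apps.is_installed('debug_toolbar'):
    # The app is in INSTALLED_APPS and the app registry has been populated,
    # so importing from it here is safe.
    import debug_toolbar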
Or this?
try:
    # Try the preferred app first
    from an_app import something
except ImportError:
    # Fall back to the other app
    from another_app import something

django automate syncdb

I'm working on a project that includes a django server, and also a setup module.
The user will be configuring their system to run my program, which includes a Django webserver element along with other items. I'm working on a setup module that assists the user in getting all of the settings correct and sets up all of the appropriate files. One of the things that I'd like to do during the setup process is essentially run a "manage.py syncdb" command that creates an appropriate SQLite file and tables from nothing.
I could grab the code found in manage.py and directly stick it into my setup module appropriately, but I'm not sure if there's a better approach that I'm missing - along the lines of two lines consisting of:
import django.something
import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings")
django.something.syncdb()
Or something of the sort. Am I just missing something here?
This should do it:
import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings")
from django.core import management
management.call_command('syncdb', interactive=False)
You can also do
import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings")

from django.core.management.commands import syncdb

syncdb.Command().execute(interactive=False)
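As a side note, syncdb was deprecated in Django 1.7 and removed in 1.9; on those versions the equivalent standalone call uses migrate instead (a sketch, with a placeholder settings module):
import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings")

import django
from django.core import management

django.setup()  # required for standalone use on Django 1.7+
management.call_command("migrate", interactive=False)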

Django South load fixtures based on environment (development, integration, production)

I'm working on a project that is using Django and South for migrations. I would like to set up some fixtures that would be used to populate the database in some environments (development, demo) but not in others (production). For example, I would like there to be some data in the system so the UI developer has something to work with in the interface they are working on or so we can quickly do a demo for a project manager without having to manually set things up via the admin interface.
While I have found plenty of ways to separate automated testing fixtures from regular fixtures, I have not been able to find anything about loading fixtures based on environment. Is this possible, or is there another way people solve this problem I'm overlooking?
There's not much you can do about initial_data fixtures. However, I've always felt those have less than optimal utility anyway. Rarely do you want the same fixture applied again and again with every call to syncdb or migrate.
If you're using a differently named fixture, you can easily cause it to run with your migration by adding the following to your forwards migration (from the South docs):
from django.core.management import call_command
call_command("loaddata", "my_fixture.json")
So really, all you need is some way to only do that in certain environments. For dev, the easiest path would be to simply rely on DEBUG. So, the previous code becomes:
from django.conf import settings
from django.core.management import call_command

if settings.DEBUG:
    call_command("loaddata", "dev_fixture.json")
If you need greater control, you can create some sort of setting that will be different in each local_settings.py (or whatever methodology you use to customize settings based on environment). For example:
# local_settings.py
ENV = 'staging'

# migration
from django.conf import settings
from django.core.management import call_command

if settings.ENV == 'staging':
    call_command("loaddata", "staging_fixture.json")

Using Constants in Settings.py

Can I use a variable declared in the Django project's settings.py in one of my module files?
For instance, using DATABASE_HOST = 'databasename'?
I'm trying to get the name of the server the application is currently deployed on you see.
You certainly can... it's encouraged, in fact. To use it, import the settings from django.conf (this imports your project's settings):
from django.conf import settings
print "My database host is %s" % settings.DATABASE_HOST
The documentation on Using settings in Python code explains why this works, and why it is preferable over importing the settings.py module directly.
Yes:
from django.conf import settings
print settings.MY_SETTINGS_VAR
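Note that DATABASE_HOST is the pre-1.2 style; on Django 1.2+ the database settings live in the DATABASES dictionary, so the equivalent lookup would be:
from django.conf import settings

# DATABASE_HOST was folded into the DATABASES setting in Django 1.2.
db_host = settings.DATABASES['default']['HOST']
print(db_host)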