Django South load fixtures based on environment (development, integration, production) - django

I'm working on a project that is using Django and South for migrations. I would like to set up some fixtures that would be used to populate the database in some environments (development, demo) but not in others (production). For example, I would like there to be some data in the system so the UI developer has something to work with in the interface they are working on or so we can quickly do a demo for a project manager without having to manually set things up via the admin interface.
While I have found plenty of ways to separate automated testing fixtures from regular fixtures, I have not been able to find anything about loading fixtures based on environment. Is this possible, or is there another way people solve this problem I'm overlooking?

There's not much you can do about initial_data fixtures. However, I've always felt those have less than optimal utility anyway. Rarely do you want the same fixture applied again and again with every call to syncdb or migrate.
If you're using a differently named fixture, you can easily cause it to run with your migration by adding the following to your forwards migration (from the South docs):
from django.core.management import call_command
call_command("loaddata", "my_fixture.json")
So really, all you need is some way to only do that in certain environments. For dev, the easiest path would be to simply rely on DEBUG. So, the previous code becomes:
from django.conf import settings
from django.core.management import call_command
if settings.DEBUG:
    call_command("loaddata", "dev_fixture.json")
If you need greater control, you can create some sort of setting that will be different in each local_settings.py (or whatever methodology you use to customize settings based on environment). For example:
# local_settings.py
ENV = 'staging'
# migration
from django.conf import settings
from django.core.management import call_command
if settings.ENV == 'staging':
    call_command("loaddata", "staging_fixture.json")
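If you end up with several environments, the per-environment branching can be collapsed into a single lookup. This is only a sketch of that idea (the fixture filenames and environment names here are hypothetical, not from the original answer):

```python
def fixture_for_env(env):
    """Map an ENV setting value to the fixture that should be loaded,
    or None when no fixture applies (e.g. production).

    Inside a migration you would then do something like:
        name = fixture_for_env(getattr(settings, 'ENV', 'production'))
        if name:
            call_command("loaddata", name)
    """
    fixtures = {
        'development': 'dev_fixture.json',
        'staging': 'staging_fixture.json',
        'demo': 'demo_fixture.json',
    }
    return fixtures.get(env)
```

Production simply has no entry in the mapping, so nothing is loaded there.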


How to create a django package without setting DJANGO_SETTINGS_MODULE as environment variable?

I am creating a package that itself uses Django and will be used within other Django applications. The main issue I am facing is that I need to use settings for various reasons, such as logging and other requirements. Since this package does not have any views/urls, we are writing tests and using pytest to run them. The tests will not run without the settings configured, so initially I put the following snippet in the __init__.py file of the root app.
import os
import django
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "my_package.settings")
django.setup()
Now the tests ran properly and the package worked as a standalone app. But the moment I installed it in the main project, it overrode the environment variable with its own settings, and you can imagine the kind of havoc that would ensue.
This is the first time I am packaging a Django app, so I am not well-versed in best practices and the docs are a little convoluted. I read the structure and code of various packages that use settings, but I am still not able to understand how to ensure the package accesses the intended settings while the project's settings are not affected.
While going through the docs, I came across this alternative to setting DJANGO_SETTINGS_MODULE:
from django.conf import settings
settings.configure(DEBUG=True)
As shown here: https://docs.djangoproject.com/en/2.2/topics/settings/#using-settings-without-setting-django-settings-module
But where exactly am I supposed to add this? To every file where the settings are imported, or will it work in the __init__.py? (I tried the latter, but something isn't right; it shows "Apps aren't loaded".)
I tried the following as well, where I imported my settings module, called configure() with it as the defaults, and called django.setup(), but it didn't do the trick:
# my_package/__init__.py
from django.conf import settings
from my_package import settings as default
if not settings.configured:
    settings.configure(default_settings=default, DEBUG=True)
import django
django.setup()
Also, I need settings mainly because I have a few parameters that can be overridden by the project that is using the package. When the package is installed, the overridden values are what I should be able to access in the package at runtime.
If someone can guide me on how to tackle this, or has a better process for creating packages that need Django settings, please do share.
So I ended up finding a way to work without setting the settings module as an environment variable. This lets me use the specified settings by importing the overridden settings as well as the default settings.
Create an apps.py file to configure your package as an app:
# my_package/apps.py
from django.apps import AppConfig
class MyPackageConfig(AppConfig):
    name = 'my_package'
    verbose_name = 'My package'
Then, in your package's root, the following snippet in __init__.py will configure only the overridden settings:
# my_package/__init__.py
import django
from django.conf import settings
from my_package import settings as overridden_settings
default_app_config = 'my_package.apps.MyPackageConfig'
if not settings.configured:
    # Get the list of attributes the module has
    attributes = dir(overridden_settings)
    conf = {}
    for attribute in attributes:
        # If the attribute is upper-cased, i.e. a settings variable, copy it into conf
        if attribute.isupper():
            conf[attribute] = getattr(overridden_settings, attribute)
    # Configure Django settings using the collected values
    settings.configure(**conf)
    # This is needed since it is a standalone Django package
    django.setup()
Reference for what django.setup() will do:
https://docs.djangoproject.com/en/2.2/topics/settings/#calling-django-setup-is-required-for-standalone-django-usage
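The upper-case filtering loop above can be distilled into a small standalone helper. This is just a sketch of the same logic, not part of the original answer; it relies only on the convention that Django settings names are upper-cased:

```python
def extract_settings(module):
    """Collect the upper-cased attributes of a settings-like object into a
    dict, mirroring the loop in the __init__.py above. The result is what
    you would pass to settings.configure(**conf)."""
    return {name: getattr(module, name)
            for name in dir(module) if name.isupper()}
```

Lower-cased helpers and dunder attributes are skipped automatically, since `isupper()` is False for them.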
Points to keep in mind:
Since this is in __init__.py, it ensures the settings are configured whenever you import anything from the package.
As mentioned in the documentation above, you have to make sure that settings are configured only once, and similarly that setup() is called only once, or it will raise an exception.
Let me know if this helps, or if you come up with a better solution.
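For the "parameters that can be overridden by the project" part of the question, a common pattern is to keep package defaults and consult the project's settings first. A minimal sketch of that idea (the `_PackageDefaults` class and `MY_PACKAGE_TIMEOUT` name are hypothetical, not from the original answer):

```python
class _PackageDefaults:
    """Hypothetical package-level defaults; the real names are up to you."""
    MY_PACKAGE_TIMEOUT = 30

def get_setting(name, project_settings, defaults=_PackageDefaults):
    """Return the project's override for `name` if it defines one,
    otherwise fall back to the package default."""
    return getattr(project_settings, name, getattr(defaults, name))
```

In real use, `project_settings` would be `django.conf.settings`, so a project can override the value simply by defining it in its settings module.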

Django single-test migration

I have recently implemented a reusable app within the Django project I am working on. For the sake of the question, let's call it reusable_app. This app also has some unit tests that run; however, these tests depend on some basic models declared next to the tests in a models.py.
/reusable_app
    __init__.py
    models.py
    views.py
    urls.py
    /tests
        __init__.py
        tests.py
        /simple_app
            __init__.py
            models.py
Now, the models aren't loaded into the database unless I add the app to INSTALLED_APPS in the test settings file. I was wondering if there is a way to achieve this without exposing the app in the settings file. I seem to be able to specify the app via @override_settings, but the migrations are not run.
Ex:
@override_settings(INSTALLED_APPS=['reusable_app'])
class TestReusableApp(TestCase):
    def test_something(self):
        ...
If reusable_app is not specified in the settings module's INSTALLED_APPS, this still yields a ProgrammingError. Am I missing something, or is there another approach?
I think the issue here is that the test runner sets up the tables before you add the app with @override_settings.
Normally what I do with reusable apps is run the tests in the context of an "example" project, with settings that include the app you want to test. This usually works pretty well, as I'm packaging the reusable app separately. Here's an example of this from a past project of mine.
However, if that's not possible, you might try overriding setUp in your tests and calling the "migrate" command there. For example:
from django.core.management import call_command

@override_settings(INSTALLED_APPS=['reusable_app'])
class MyTestCase(TestCase):
    def setUp(self):
        call_command('migrate', 'reusable_app')
This is a bit messy, but it might be worth trying. Depending on how things go, you might also need to run django.setup().

What's the best way to make a new migration in a standalone Django app?

I have a Django app which was spun out a while ago from the project it was originally built for, made into a separate standalone app, and put on PyPI (https://pypi.python.org/pypi/mysociety-django-images). The app is built expecting to be run with Django 1.7+.
I'd now like to make a change to one of the models there, just changing a max_length on a field. I can't see anything in the documentation about how to make a new migration for it. Do I need to make an example project and use that, or is there a better way?
You can do this by making a script like:
#!/path/to/your/python
import sys

import django
from django.conf import settings
from django.core.management import call_command

settings.configure(
    DEBUG=True,
    INSTALLED_APPS=(
        'django.contrib.contenttypes',
        'yourAppName',
    ),
)
django.setup()
call_command('makemigrations', 'yourAppName')
(This is also how we go about testing our standalone apps.)
I don't know which is the better practice between this and creating an example project, but this approach seems lighter.

Best practices to integrate a django app with other django apps

In Django:
a) What is the best way to test whether another app is installed? (By installed I mean present in INSTALLED_APPS.)
b) What is the recommended way to alter the behaviour of the current app accordingly? I understand that:
if "app_to_test" in settings.INSTALLED_APPS:
    ...  # Do something given app_to_test is installed
else:
    ...  # Do something given app_to_test is NOT installed
is possible, but is there another way? Is this the recommended way?
c) What is the recommended practice for importing modules that are only required if another app is installed? Import them inside the if block that tests for the installed app?
I tend to favour checking INSTALLED_APPS as you have listed in your question.
if DEBUG and 'debug_toolbar' not in INSTALLED_APPS:
    INSTALLED_APPS.append('debug_toolbar')
    INTERNAL_IPS = ('127.0.0.1',)
This works well when you have settings distributed across different settings files that don't necessarily have knowledge of each other. For example, I might have a shared_settings.py which contains a base set of INSTALLED_APPS, then a debug_settings.py which imports shared_settings.py and adds any additional apps as required.
The same applies for non-settings. For example, if you have Django South installed and want to create introspection rules for South only if it's installed, I would do this:
if 'south' in settings.INSTALLED_APPS:
    from south.modelsinspector import add_introspection_rules
    # Let South introspect custom fields for migration rules.
    add_introspection_rules([], [r"^myapp\.models\.fields\.SomeCustomField"])
As I see it, there's no need to try and import a module if you know there's the possibility that it may not be installed. If the user has listed the module in INSTALLED_APPS then it is expected to be importable.
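The guarded-import pattern from (c) can also be wrapped in a small helper so each optional feature doesn't repeat the membership check. This is a generic sketch of the idea, using only the standard library (the INSTALLED_APPS list here is just a plain list of app names, as in the settings examples above):

```python
import importlib

def import_if_installed(module_path, installed_apps):
    """Import module_path only if its top-level app appears in
    installed_apps; return None otherwise, so callers can feature-gate."""
    app = module_path.split('.', 1)[0]
    if app not in installed_apps:
        return None
    return importlib.import_module(module_path)
```

A caller then checks the return value for None instead of wrapping every optional import in its own if block.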
What about this?
try:
    # Try the preferred app first
    from an_app import something
except ImportError:
    # Fall back to the alternative app
    from another_app import something

How to deal with heroku renaming my root-level app?

Heroku seems to prefer that deployed apps have a certain structure: the .git directory and manage.py are at the root level and everything else is below that.
I have inherited a Django app I'm trying to deploy for testing purposes and I don't think I can restructure it so I was wondering if I have an alternative.
The structure I've inherited has most of the files in the root folder:
./foo:
    __init__.py
    .git
    Procfile
    settings.py
    manage.py
    bar/
        models.py, etc.
From within foo I can run python manage.py shell and in there from foo.bar import models works.
However, when I push this to Heroku, it puts the root in /app, so foo becomes app and from foo.bar import models no longer works.
Is there any magic settings that would allow me to indicate that app is really foo and allow me to continue without refactoring the app structure and/or all the imports?
Similar question: I think my question is similar to Heroku - Django: Had to change every mentioning of myproject into app to get my site working. How to best avoid this in the future?, except I'm asking if there's anything I can do without changing the site structure.
You can try adding a line to manage.py that modifies sys.path to make sure that foo is in your path:
import sys
PROJECT_DIR = os.path.abspath(os.path.dirname(os.path.dirname(__file__)))
if PROJECT_DIR not in sys.path:
sys.path.insert(0, PROJECT_DIR)
Although, as a side note, it's not really good Django style to have your top-level directory be a Python module, precisely because it makes deployment more complicated (I'm not positive that the above will work on Heroku). I might recommend just changing your code to import from bar directly and removing foo/__init__.py.
The easiest way would be to delete foo/__init__.py and modify your import statements to import from bar instead of from foo, e.g.
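The manage.py snippet above can be wrapped as a small reusable helper, which also makes the idempotence explicit (adding the same directory twice is a no-op). This is only a sketch of the same technique:

```python
import os
import sys

def ensure_on_path(directory, search_path=None):
    """Prepend directory to the import search path (sys.path by default)
    if it is not already there, like the manage.py snippet above."""
    path = sys.path if search_path is None else search_path
    directory = os.path.abspath(directory)
    if directory not in path:
        path.insert(0, directory)
    return path
```

Calling `ensure_on_path(PROJECT_DIR)` early in manage.py (or wsgi.py) would have the same effect as the inline version.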
from foo.bar.models import *
becomes
from bar.models import *
Alternatively, you can use relative imports. So if you wanted to import bar.models in bar.views, you'd do:
from .models import *
The reason this is an issue is that Django 1.4 changed the folder structure for newly created projects. Before 1.4 you'd have a structure similar to the one you described, minus foo/__init__.py. Heroku adopted Django 1.4's project structure, which is arguably better because it encapsulates the settings within the project and makes it more portable.