How to deal with Heroku renaming my root-level app? - django

Heroku seems to prefer that deployed apps have a certain structure, mostly that .git and manage.py are at the root level and everything else is below that.
I have inherited a Django app I'm trying to deploy for testing purposes and I don't think I can restructure it so I was wondering if I have an alternative.
The structure I've inherited has most of the files in the root folder:
./foo:
    __init__.py
    .git/
    Procfile
    settings.py
    manage.py
    bar/
        models.py
        ...
From within foo I can run python manage.py shell and in there from foo.bar import models works.
However, when I push this to Heroku, it puts the root in /app, so foo becomes app and from foo.bar import models no longer works.
Is there any magic setting that would allow me to indicate that app is really foo and let me continue without refactoring the app structure and/or all the imports?
Similar question: I think my question is similar to "Heroku - Django: Had to change every mentioning of myproject into app to get my site working. How to best avoid this in the future?", except I'm asking if there's anything I can do without changing the site structure.

You can try adding a few lines to manage.py that modify sys.path to make sure the directory containing foo is on your path:
import os
import sys

PROJECT_DIR = os.path.abspath(os.path.dirname(os.path.dirname(__file__)))
if PROJECT_DIR not in sys.path:
    sys.path.insert(0, PROJECT_DIR)
Although, as a side note, it's not really good Django style to have your top-level directory be a Python module, precisely because it makes deployment more complicated (I'm not positive the above will work on Heroku). I might recommend just changing your code to import from bar directly and removing foo/__init__.py.

The easiest way would be to delete foo/__init__.py and modify your import statements to import from bar instead of from foo, e.g.
from foo.bar.models import *
becomes
from bar.models import *
Alternatively you can use relative imports. So if you wanted to import bar.models in bar.views, you'd do
from .models import *
The reason this is an issue is that Django 1.4 changed the folder structure for newly created projects. Before 1.4 you'd have a structure similar to the one you described, minus foo/__init__.py. Heroku adopted Django 1.4's project structure, which is arguably better because it encapsulates the settings within the project and makes it more portable.
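For reference, the layout that django-admin.py startproject myproject generates in Django 1.4 and later looks roughly like this (the project name is just an example):
myproject/
    manage.py
    myproject/
        __init__.py
        settings.py
        urls.py
        wsgi.py
Only manage.py sits at the repository root; the settings live inside the inner myproject package, which matches the structure Heroku expects.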

Related

Heroku application structure for Django

I started a Django 1.6 application locally, with the structure that it came with:
project/
    django_folder/
        wsgi.py
    my_app/
        views.py
        models.py
        ...
    my_second_app/
        views.py
        models.py
        ...
Now when I try to deploy on Heroku, it only seems to work with the Procfile in the top directory. I seem to be able to move manage.py around without problems.
My problem is that I have environment variables set so that in my_app's views.py I do the following:
from models import my_model
When I put it on Heroku, it seems that I need to specify my_app:
from my_app.models import my_model
Is it possible to set an environment variable in Heroku so I don't need the my_app in front of models? (My second app is not finished, but partly integrated. It is going to be a pain to change all the imports in both apps.) When I add my_app in settings.py, is this supposed to affect import paths?
You don't need environment variables; just use relative imports. So in your views.py:
from .models import MyModel
Note the leading dot, which makes this a relative import. In this case there's no need to do anything more. This is the standard Pythonic way of doing things.
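A minimal sketch of what that might look like in context (MyModel and the view name are placeholders, not taken from the original code):
# my_app/views.py
from django.shortcuts import render

from .models import MyModel  # relative import: resolves the same way locally and on Heroku


def model_list(request):
    # hypothetical view, included only to show the import in use
    return render(request, 'my_app/model_list.html', {'objects': MyModel.objects.all()})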

PyCharm cannot resolve reference in __init__.py with Django project apps

I am at my wits end with this issue, and would love some help resolving this.
I have a Django project with a bunch of sub apps as such:
my_project/
    manage.py
    my_project/
        settings.py
        urls.py
        wsgi.py
    app_root/
        __init__.py
        app1/
            __init__.py
            models.py
            views.py
            urls.py
            templates/
                [various templates].html
        app2/
            __init__.py
            models.py
            [etc]
        app3/
            [etc]
In my Django settings.py I have the apps installed as such:
'app_root.app1',
'app_root.app2',
In PyCharm, I've tried various things but essentially have Content Root as the top "my_project/" and app_root, app1, app2, etc. as Source Roots. I've tried just having app_root as the only Source Root, and I've tried having only app1, app2, etc. as Source Roots, but nothing makes any difference.
Everything functions fine: the app runs and everything. However, PyCharm is unable to resolve my apps. For example, if I try this:
import app_root
...
def some_function(self):
    app_root.app1.models.My_Model.objects.all()
it will highlight app1 with the error "Cannot find reference 'app1' in '__init__.py'"
This also means it can't autocomplete anywhere in the path while typing app_root.app1. (it has no idea about models, views, etc.), despite there being an (empty) __init__.py in every directory.
I also cannot use any refactoring because it always says "Function is not under the source root"
I've spent countless hours trying to get PyCharm to behave but simply cannot find a way to do it. Is there any way this can be done so PyCharm will autocomplete my apps and not keep giving inspection warnings?
I had some similar issues. My solution: within the PyCharm preferences I added a path to app_root in my active Python Interpreter.
After an exchange with the PyCharm folks, here is what I learned:
Django imports all apps in the INSTALLED_APPS variable, and their models, using __import__ for its own purposes.
In your case, it runs
__import__("app_root.app1")
__import__("app_root.app1.models")
After that, you call import app_root and obtain the module app_root with app_root.app1 and app_root.app1.models already imported by internal Django code.
The fact that Django imports apps and models is a Django internal; it is undocumented and may change in future releases. We believe you should not rely on it in your production code, and neither should PyCharm.
Here is an example in bare Python (no Django):
__import__("encodings.ascii")
import encodings
print(encodings.ascii.Codec)  # this code works, but PyCharm marks "ascii" as an unknown module
So basically, it's not supposed to work with just import app_root, but Django's funkiness is masking that.
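If you want code that both runs and resolves statically in PyCharm, one option (a sketch, reusing the My_Model name from the snippet above) is to import the submodule explicitly instead of relying on Django having pre-imported it:
# explicit submodule import: PyCharm can resolve this without knowing Django's internals
from app_root.app1 import models as app1_models


def some_function(self):
    return app1_models.My_Model.objects.all()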

django-admin.py can't find custom settings file

I have several customized Django settings files; this is basically my project structure:
MainProject/
    manage.py
    my_project/
        settings/
            base.py
            dev.py
            prod.py
I've created the __init__.py files inside the directories to identify them as packages.
I've exported the DJANGO_SETTINGS_MODULE to point to the chosen settings file.
The manage.py command seems to handle it pretty well; I've never had a problem with it.
The problem is that no matter what I do, django-admin.py is not able to find any settings file. I've tried several possible solutions but nothing seems to work so far:
1. I've used --settings=my_project.settings.dev
2. I've edited and hard-coded manage.py to point it at the dev.py file
3. I've created a settings file inside both the MainProject and my_project directories, importing the dev file (which in turn imports base.py)
4. I've created a settings file that lets Django know which file it should use as settings
This is regarding point 4:
from django.core.management import setup_environ

try:
    import my_project.settings.dev as settings
except ImportError:
    import sys
    sys.stderr.write("Couldn't find the settings.py module.")
    sys.exit(1)

setup_environ(settings)
Nothing seems to work so far.
====================================
SOLUTION:
I did not find the exact solution, but thanks to a comment on the chosen answer I understood that you can basically use manage.py for everything you could do with django-admin.py; I didn't know that! Since things DO work for me using manage.py, I'm fine with it.
What I recommend doing:
Create a normal settings.py file and import one of the others in there. This avoids duplication of settings shared among the three scenarios, and it is actually the recommended way of doing it according to the DRY principle.
Typically, you will only have to set the debug flag, database settings and private keys in the environment-specific settings files; all the other settings should be shared among all scenarios to avoid forgetting to update one and getting hard-to-debug errors. A sketch of that split follows below.
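A sketch of what one of the environment-specific files might look like under that approach (the values are illustrative, not taken from the original project):
# my_project/settings/dev.py
from .base import *  # everything shared lives in base.py

DEBUG = True
TEMPLATE_DEBUG = DEBUG

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': 'dev.sqlite3',
    }
}

SECRET_KEY = 'dev-only-not-secret'  # the real production key belongs in prod.py, kept out of version control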
Have you tried importing the dev settings inside the __init__.py of your settings module?
settings/__init__.py
from .dev import *
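With that in place, django-admin.py should be able to find the settings when pointed at the package, assuming the directory containing my_project is on your PYTHONPATH, for example:
export DJANGO_SETTINGS_MODULE=my_project.settings
django-admin.py shell
Or, bypassing the __init__.py re-export, point it at a specific settings file:
django-admin.py shell --settings=my_project.settings.dev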

Django 1.4 config settings

I have been using Django for a long time but I recently upgraded to 1.4.
When I created a new project, here is how it was made:
-my_proj
-my_app
-settings.py
-urls.py
-wsgi.py
-venv
-manage.py
-requirements.txt
I am working on my local machine as well as on Heroku. The issue is how I am supposed to refer to modules.
When I am on my local machine, everything seems like it is supposed to be referred to as my_proj.my_app. However, when I am on Heroku, everything has to be referred to as my_app.
Does anyone know of some areas I should look at that might be causing this problem?
Moving your my_app dir to the root directory of your project (where manage.py is located), as is done in the Django 1.4 tutorial, will make it a top-level module and will allow you to import it just as my_app on your local machine. I would suggest trying this and checking whether Heroku behaves the same.
Alternatively, adding an empty __init__.py to your my_proj directory, if there isn't one already, might also help Heroku see my_proj as a top-level module, thus allowing you to import your app as my_proj.my_app.
I hope my advice was insightful and helpful.
Good luck!

Django South load fixtures based on environment (development, integration, production)

I'm working on a project that is using Django and South for migrations. I would like to set up some fixtures that would be used to populate the database in some environments (development, demo) but not in others (production). For example, I would like there to be some data in the system so the UI developer has something to work with in the interface they are working on or so we can quickly do a demo for a project manager without having to manually set things up via the admin interface.
While I have found plenty of ways to separate automated testing fixtures from regular fixtures, I have not been able to find anything about loading fixtures based on environment. Is this possible, or is there another way people solve this problem that I'm overlooking?
There's not much you can do about initial_data fixtures. However, I've always felt those have less than optimal utility anyway. Rarely do you want the same fixture applied again and again with every call to syncdb or migrate.
If you're using some differently named fixture, you can easily cause it to run with your migration by adding the following to your forwards migration (from the South docs):
from django.core.management import call_command
call_command("loaddata", "my_fixture.json")
So really, all you need is some way to only do that in certain environments. For dev, the easiest path would be to simply rely on DEBUG. So, the previous code becomes:
from django.conf import settings
from django.core.management import call_command

if settings.DEBUG:
    call_command("loaddata", "dev_fixture.json")
If you need greater control, you can create some sort of setting that will be different in each local_settings.py (or whatever methodology you use to customize settings based on environment). For example:
# local_settings.py
ENV = 'staging'

# migration
from django.conf import settings
from django.core.management import call_command

if settings.ENV == 'staging':
    call_command("loaddata", "staging_fixture.json")
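For context, here is a sketch of where those lines would sit inside a complete South data migration (the migration filename and fixture name are placeholders; this assumes a plain DataMigration with no schema changes):
# migrations/0002_load_staging_fixture.py
from django.conf import settings
from django.core.management import call_command
from south.v2 import DataMigration


class Migration(DataMigration):

    def forwards(self, orm):
        # only load the fixture when running in the staging environment
        if getattr(settings, 'ENV', None) == 'staging':
            call_command("loaddata", "staging_fixture.json")

    def backwards(self, orm):
        # loading a fixture is not automatically reversible; nothing to undo here
        pass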