Django runs successfully at localhost but 500 on AWS EB

I just wrote a simple Django application to host on AWS Elastic Beanstalk. I can run the server successfully on localhost. However, when I deploy it to EB, it fails with a 500 error.
Here is my project tree
.
├── README.md
├── db.sqlite3
├── djangosite
│   ├── __init__.py
│   ├── settings.py
│   ├── urls.py
│   └── wsgi.py
├── intro
│   ├── __init__.py
│   ├── admin.py
│   ├── apps.py
│   ├── migrations
│   │   └── __init__.py
│   ├── models.py
│   ├── tests.py
│   └── views.py
├── manage.py
├── requirement.txt
└── templates
    └── index.html
I couldn't find any entries from the relevant time in the logs. Usually a 500 means something is wrong with my code, but it runs fine when I start the server locally:
$ python manage.py runserver
I used eb ssh to log in to the instance and found there is no Django in /opt/current/app, where my code sits.
But I did add Django==1.9.8 to requirement.txt. It seems EB did not install Django. It is also not in /opt/python/run/venv/lib/python2.7/site-packages/.

(I don't have enough reputation to comment)
I'm assuming that your application starts at all on the production server (you don't mention whether it does).
Did you set DEBUG = False on the production server? With that setting, uncaught exceptions produce a 500 response, whereas DEBUG = True in development (locally) returns the debug screen instead.
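For reference, the toggle in question lives in settings.py; the values below are illustrative (the EB hostname is made up). With DEBUG = False, any uncaught exception surfaces only as a bare 500 page:

```python
# settings.py — illustrative fragment, not the asker's actual file
DEBUG = False  # production: uncaught exceptions render a plain 500 page
# With DEBUG = True (development), Django shows the full traceback instead.

ALLOWED_HOSTS = ["my-env.us-west-2.elasticbeanstalk.com"]  # hypothetical EB host
```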

Django best practice for scripts organizing

I have a Django 3.1 project with the following layout:
.
├── app
│   ├── app
│   │   ├── asgi.py
│   │   ├── __init__.py
│   │   ├── settings.py
│   │   ├── urls.py
│   │   └── wsgi.py
│   ├── core
│   │   ├── admin.py
│   │   ├── apps.py
│   │   ├── fixtures
│   │   │   ├── Client.json
│   │   │   └── DataFeed.json
│   │   ├── __init__.py
│   │   ├── migrations
│   │   │   ├── 0001_initial.py
│   │   │   ├── 0002_auto_20201009_0950.py
│   │   │   └── __init__.py
│   │   ├── models.py
│   │   └── tests
│   │       └── __init__.py
│   └── manage.py
I want to add 2 scripts to this project:
download_xml.py - to check and download .xml files from external sources by schedule (every ~30 min)
update_db_info.py - to be invoked by download_xml.py and transfer data from downloaded xml to the database
What is the best Django practice for organizing the placement of these kinds of scripts?
My ideas:
just create scripts folder inside of an app/core and put scripts there. Invoke them using cron
run python manage.py startapp db_update
so that a new Django app is created. I would remove migrations, views, models, etc. from it and put the scripts there; use cron again
Create an app/core/management/commands folder and put the scripts there. Call them from cron using python manage.py download_xml && python manage.py update_db_info
Option 3 (mostly)
However, if download_xml.py doesn't use or rely on Django, I would put it in a scripts directory outside of the Django project (but still in source control). You might decide not to do this if the script does need to be deployed with your app. It doesn't need to be a management command though.
update_db_info.py definitely sounds like it would be best suited as a management command.
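To illustrate the non-Django route for download_xml.py (the feed URL and element names here are made up), a stdlib-only sketch that cron can invoke directly might look like:

```python
# download_xml.py — hypothetical standalone fetcher; no Django imports needed
import urllib.request
import xml.etree.ElementTree as ET


def fetch_xml(url):
    """Download an XML document and return its parsed root element."""
    with urllib.request.urlopen(url) as resp:
        return ET.fromstring(resp.read())


def parse_items(root):
    """Collect (tag, text) pairs from the document's direct children."""
    return [(child.tag, child.text) for child in root]


# Cron entry point (commented out so this stays import-safe):
# root = fetch_xml("https://example.com/feed.xml")  # hypothetical feed URL
# print(parse_items(root))
```

Because it has no Django dependency, it can be tested and scheduled without touching manage.py at all.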

How do you import modules from google.cloud for use in AWS Lambda?

I'm trying to run a script on AWS Lambda that sends data to Google Cloud Storage (GCS) at the end. When I do so locally, it works, but when I run the script on AWS Lambda, importing the GCS client library fails (other imports work fine though). Anyone know why?
Here's an excerpt of the script's imports:
# main_script.py
import robobrowser
from google.cloud import storage
# ...generate data...
# ...send data to storage...
The error message from AWS:
Unable to import module 'main_script': No module named google.cloud
To confirm that the problem is with the google client library import, I ran a version of this script in AWS Lambda with and without the GCS import (commenting out the later references to it) and the script proceeds as usual without import-related errors when the GCS client library import is commented out. Other imports (robobrowser) work fine at all times, locally and on AWS.
I'm using a virtualenv with python set to 2.7.6. To deploy to AWS Lambda, I'm going through the following manual process:
1. Zip the pip packages for the virtual environment:
cd ~/.virtualenvs/{PROJECT_NAME}/lib/python2.7/site-packages
zip -r9 ~/Code/{PROJECT_NAME}.zip *
2. Zip the contents of the project, adding them to the same zip as above:
zip -g ~/Code/{PROJECT_NAME}.zip *
3. Upload the zip to AWS and test using the web console.
Here is a subset of the result from running tree inside ~/.virtualenvs/{PROJECT_NAME}/lib/python2.7/site-packages:
...
│
├── google
│   ├── ...
│   ├── cloud
│   │   ├── _helpers.py
│   │   ├── _helpers.pyc
│   │   ├── ...
│   │   ├── bigquery
│   │   │   ├── __init__.py
│   │   │   ├── __init__.pyc
│   │   │   ├── _helpers.py
│   │   │   ├── _helpers.pyc
│   │   ├── ...
│   │   ├── storage
│   │   │   ├── __init__.py
│   │   │   ├── __init__.pyc
│   │   │   ├── _helpers.py
│   │   │   ├── _helpers.pyc
├── robobrowser
│   ├── __init__.py
│   ├── __init__.pyc
│   ├── browser.py
│   ├── browser.pyc
│   ├── ...
...
Unzipping and inspecting the contents of the zip confirms this structure is kept intact during the zipping process.
I was able to solve this problem by adding __init__.py to the google and google/cloud directories in the pip installation for google-cloud. Despite the current google-cloud package (0.24.0) claiming to support Python 2.7, the package structure as downloaded via pip seems to cause problems for me.
In the interest of reporting everything, I also had a separate problem after doing this: AWS Lambda had trouble importing the main script as a module. I fixed this by recreating the repo step by step from scratch. I wasn't able to pinpoint the cause of this second issue, but hey. Computers.
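The __init__.py fix described above can be sketched in a few lines of Python (the "site-packages" path is a placeholder; point it at your virtualenv's real one):

```python
# Add the __init__.py files that pip's namespace-package layout for
# google-cloud omits, so the zipped Lambda bundle can resolve google.cloud.
from pathlib import Path

site_packages = Path("site-packages")  # placeholder for the virtualenv path

for pkg in ("google", "google/cloud"):
    init_file = site_packages / pkg / "__init__.py"
    init_file.parent.mkdir(parents=True, exist_ok=True)
    init_file.touch(exist_ok=True)  # an empty file marks a regular package
```

Run this before zipping so the marker files are included in the bundle.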

Buildout, Django, and Passenger

I have a Django project in Buildout which I'd like to set up with my nginx server over Phusion Passenger.
The documentation for doing this doesn't seem to exist.
There seems to be a need for creating a passenger_wsgi.py file for setting up the WSGI environment, however I'm not sure how that will work.
Since Buildout does its own internal hacks with the Python path, how can I create and supply this file, and where in my project should I put it?
My project looks like this:
.
├── bin
│   ├── buildout
│   ├── django
│   ├── django.wsgi
│   ├── gunicorn
│   ├── ipython
│   ├── multiple-part-upload.py
│   ├── nosetests
│   ├── python
│   └── test
├── conf
│   ├── deploy
│   ├── shared
│   └── vagrant
├── src
│   ├── myproject
│   └── myproject.egg-info
├── bootstrap.py
├── bower.json
├── buildout.cfg
├── README.mkd
├── setup.py
└── Vagrantfile
Where should I put passenger_wsgi.py so that a) Passenger will find it and b) my Buildout eggs will be included in the path?
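One possible sketch (untested with Passenger; file locations follow the tree above): place passenger_wsgi.py in the project root next to buildout.cfg, and have it exec the Buildout-generated bin/django.wsgi so Buildout's sys.path setup and its application object are reused rather than duplicated:

```python
# passenger_wsgi.py — hypothetical sketch; reuses the WSGI script Buildout
# generated in bin/, which already contains the egg path setup.
import os

here = os.path.dirname(os.path.abspath(__file__))
wsgi_script = os.path.join(here, "bin", "django.wsgi")

application = None
if os.path.exists(wsgi_script):  # guard so the sketch degrades gracefully
    namespace = {"__file__": wsgi_script}
    with open(wsgi_script) as f:
        exec(compile(f.read(), wsgi_script, "exec"), namespace)
    application = namespace.get("application")  # the name Passenger looks for
```

This keeps Passenger pointed at a stable path while Buildout remains free to regenerate bin/django.wsgi on each build.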

Django ImportError when attempting to use a setting in a settings directory

Alright, so I've been wrestling with this problem for a good two hours now.
I want to use a settings module, local.py, when I run my server locally via this command:
$ python manage.py runserver --settings=mysite.settings.local
However, I see this error when I try to do this:
ImportError: Could not import settings 'mysite.settings.local' (Is it on sys.path?): No module named base
This is how my directory is laid out:
├── manage.py
├── media
├── myapp
│   ├── __init__.py
│   ├── models.py
│   ├── tests.py
│   └── views.py
└── mysite
    ├── __init__.py
    ├── __init__.pyc
    ├── settings
    │   ├── __init__.py
    │   ├── __init__.pyc
    │   ├── local.py
    │   └── local.pyc
    ├── urls.py
    └── wsgi.py
Similar questions have been asked, but their solutions have not worked for me.
One suggestion was to include an initialization file in the settings folder, but, as you can see, this is what I have already done.
Need a hand here!
It looks like Django is not finding the mysite.settings.local package because it is not on your PYTHONPATH.
You have to add it to sys.path in your manage.py file; the following should work for you:
sys.path.append(os.path.dirname(os.path.abspath(__file__)))
It seems the local.py module imports from base.py; you probably have something like:
from base import *
at the top of your local settings.
But the base.py settings module is not there, hence the error.
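For reference, the layout the error implies is something like the following (values are illustrative); the ImportError goes away once base.py actually exists next to local.py:

```python
# mysite/settings/base.py — shared settings; its absence is exactly what
# triggers "No module named base" (illustrative values only)
DEBUG = False
INSTALLED_APPS = ["myapp"]

# mysite/settings/local.py — local overrides layered on top:
#     from base import *   # Python 2 style implicit relative import
#     DEBUG = True
```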

How can I correctly set DJANGO_SETTINGS_MODULE for my Django project (I am using virtualenv)?

I am having some trouble setting the DJANGO_SETTINGS_MODULE for my Django project.
I have a directory at ~/dev/django-project. In this directory I have a virtual environment which I have set up with virtualenv, and also a django project called "blossom" with an app within it called "onora". Running tree -L 3 from ~/dev/django-project/ shows me the following:
.
├── Procfile
├── blossom
│   ├── __init__.py
│   ├── __init__.pyc
│   ├── fixtures
│   │   └── initial_data_test.yaml
│   ├── manage.py
│   ├── onora
│   │   ├── __init__.py
│   │   ├── __init__.pyc
│   │   ├── admin.py
│   │   ├── admin.pyc
│   │   ├── models.py
│   │   ├── models.pyc
│   │   ├── tests.py
│   │   └── views.py
│   ├── settings.py
│   ├── settings.pyc
│   ├── sqlite3-database
│   ├── urls.py
│   └── urls.pyc
├── blossom-sqlite3-db2
├── requirements.txt
└── virtual_environment
    ├── bin
    │   ├── activate
    │   ├── activate.csh
    │   ├── activate.fish
    │   ├── activate_this.py
    │   ├── django-admin.py
    │   ├── easy_install
    │   ├── easy_install-2.7
    │   ├── gunicorn
    │   ├── gunicorn_django
    │   ├── gunicorn_paster
    │   ├── pip
    │   ├── pip-2.7
    │   ├── python
    │   └── python2.7 -> python
    ├── include
    │   └── python2.7 -> /System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7
    └── lib
        └── python2.7
I am trying to dump my data from the database with the command
django-admin.py dumpdata
My approach is to cd into ~/dev/django-project, run source virtual_environment/bin/activate, and then run django-admin.py dumpdata.
However, I am getting the following error:
ImportError: Settings cannot be imported, because environment variable DJANGO_SETTINGS_MODULE is undefined.
I did some googling and found this page: https://docs.djangoproject.com/en/dev/topics/settings/#designating-the-settings
which tells me that
When you use Django, you have to tell it which settings you're using.
Do this by using an environment variable, DJANGO_SETTINGS_MODULE. The
value of DJANGO_SETTINGS_MODULE should be in Python path syntax, e.g.
mysite.settings. Note that the settings module should be on the Python
import search path.
Following a suggestion at Setting DJANGO_SETTINGS_MODULE under virtualenv? I appended the lines
export DJANGO_SETTINGS_MODULE="blossom.settings"
echo $DJANGO_SETTINGS_MODULE
to virtual_environment/bin/activate. Now, when I run the activate command in order to activate the virtual environment, I get output reading:
DJANGO_SETTINGS_MODULE set to blossom.settings
This looks good to me, but now the problem I have is that running
django-admin.py dumpdata
returns the following error:
ImportError: Could not import settings 'blossom.settings' (Is it on sys.path?): No module named blossom.settings
What am I doing wrong? How can I check the sys.path? How is this supposed to work?
Thanks.
Don't run django-admin.py for anything other than the initial project creation. For everything after that, use manage.py, which takes care of finding the settings.
I just encountered the same error, and eventually managed to work out what was going on (the big clue was (Is it on sys.path?) in the ImportError).
You need to add your project directory to PYTHONPATH — this is what the documentation means by
Note that the settings module should be on the Python import search path.
To do so, run
$ export PYTHONPATH=$PYTHONPATH:$PWD
from the ~/dev/django-project directory before you run django-admin.py.
You can add this command (replacing $PWD with the actual path to your project, i.e. ~/dev/django-project) to your virtualenv's source script. If you move to virtualenvwrapper at some point (which is designed for exactly this kind of situation), you can add the export PY... line to the auto-generated postactivate hook script.
mkdjangovirtualenv automates this even further, adding the appropriate entry to the Python path for you, but I have not tested it myself.
On a Unix-like machine you can simply alias the virtualenv activation like this and use the alias instead of typing it every time:
.bashrc
alias cool='source /path_to_ve/bin/activate; export DJANGO_SETTINGS_MODULE=django_settings_folder.settings; cd path_to_django_project; export PYTHONPATH=$PYTHONPATH:$PWD'
My favourite alternative is passing the settings file as a runtime parameter to manage.py, in Python package syntax, e.g.:
python manage.py runserver --settings folder.filename
More info: Django docs.
I know there are plenty of answers, but this one worked for me, just for the record.
Navigate to your .virtual_env folder where all the virtual environments are.
Go to the environment folder specific to your project.
Append export DJANGO_SETTINGS_MODULE=<django_project>.settings,
or export DJANGO_SETTINGS_MODULE=<django_project>.settings.local if you are using a separate settings file stored in a settings folder.
Yet another way to deal with this issue is to use the python-dotenv package and include PYTHONPATH and DJANGO_SETTINGS_MODULE in the .env file along with your other environment variables. Then modify your manage.py and wsgi.py to load them as stated in the instructions.
from dotenv import load_dotenv
load_dotenv()
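If you want to see what load_dotenv is doing here (or avoid the extra dependency), a simplified stdlib stand-in looks like this — it handles plain KEY=VALUE lines only, with no quoting or interpolation:

```python
import os


def load_env_file(path=".env"):
    """Minimal stand-in for dotenv.load_dotenv: reads KEY=VALUE lines,
    skipping blanks and comments, without overriding existing variables."""
    if not os.path.exists(path):
        return
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```

Call load_env_file() at the top of manage.py before Django reads DJANGO_SETTINGS_MODULE.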
I had a similar error while working on a Windows machine. My problem was using the wrong debug configuration. Use Python: Django as your debug config option.
First, ensure you've exported/set DJANGO_SETTINGS_MODULE correctly, as described here.