I have a project with a structure similar to what is described in Two Scoops of Django.
Namely:
1. photoarchive_project is repository root (where .git lives).
2. The project itself is photoarchive.
3. Config files are separate for separate environments.
The traceback and other info is below.
The file runtime.txt sits next to the .git directory, that is, in the very directory where git is initialized.
The problem is that Heroku can't even determine that Python should be applied. Could you give me a kick here?
.git/config
[core]
    repositoryformatversion = 0
    filemode = true
    bare = false
    logallrefupdates = true
[remote "origin"]
    url = ssh://git@bitbucket.org/Kifsif/photoarchive.git
    fetch = +refs/heads/*:refs/remotes/origin/*
[branch "master"]
    remote = origin
    merge = refs/heads/master
[remote "heroku"]
    url = https://git.heroku.com/powerful-plains-97572.git
    fetch = +refs/heads/*:refs/remotes/heroku/*
traceback
(photoarchive) michael@ThinkPad:~/workspace/photoarchive_project$ git push heroku master
Counting objects: 3909, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (3617/3617), done.
Writing objects: 100% (3909/3909), 686.44 KiB | 0 bytes/s, done.
Total 3909 (delta 2260), reused 0 (delta 0)
remote: Compressing source files... done.
remote: Building source:
remote:
remote: ! No default language could be detected for this app.
remote: HINT: This occurs when Heroku cannot detect the buildpack to use for this application automatically.
remote: See https://devcenter.heroku.com/articles/buildpacks
remote:
remote: ! Push failed
remote: Verifying deploy...
remote:
remote: ! Push rejected to powerful-plains-97572.
remote:
To https://git.heroku.com/powerful-plains-97572.git
! [remote rejected] master -> master (pre-receive hook declined)
error: failed to push some refs to 'https://git.heroku.com/powerful-plains-97572.git'
tree
(photoarchive) michael@ThinkPad:~/workspace/photoarchive_project$ tree
.
├── docs
├── media
├── photoarchive
│ ├── config
│ │ ├── settings
│ │ │ ├── base.py
│ │ │ ├── constants.py
│ │ │ ├── heroku.py
│ │ │ ├── __init__.py
│ │ │ ├── local.py
│ │ │ └── production.py
│ └── manage.py
├── .git
├── .gitignore
├── Procfile
└── runtime.txt
runtime.txt
python-3.6.1
You need to define a requirements.txt inside the root of your project folder. This file should contain a list of all your project dependencies.
You can generate this file on your local development machine by running:
$ pip freeze > requirements.txt
Then check it into version control and push it to Heroku.
Heroku looks for this file to determine that your app is, in fact, a Python application =)
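As a sketch of why this fixes the detection failure: the Python buildpack only claims an app when it finds one of a few marker files at the repository root (the directory you push, where .git lives). The helper below is illustrative, not Heroku's actual code, and the marker list is an assumption based on the buildpack's documentation:

```python
import os

# Illustrative sketch (not Heroku's real detection code): the Python
# buildpack claims an app only when one of these marker files exists
# at the repository root. runtime.txt is deliberately absent from the
# list -- it selects the interpreter version but does not identify the
# app as Python.
PYTHON_MARKERS = ("requirements.txt", "setup.py", "Pipfile")

def detects_python_app(repo_root):
    return any(
        os.path.isfile(os.path.join(repo_root, marker))
        for marker in PYTHON_MARKERS
    )
```

With only runtime.txt present, as in the tree above, this check fails, which matches the "No default language could be detected" message.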
Related
I am having trouble with the "couldn't find that process type" error on Heroku. I submitted a ticket Thursday but still don't have a solution and they are not open for folks like me on the weekend, so I am posting here.
Please note:
This is a Django app
It runs locally under both heroku local and Django's runserver, but not on Heroku itself.
I was following a solution I read here:
Couldn't find that process type, Heroku
which was to take the Procfile out, do a commit, then put it back, and do a commit, and it should work.
The output from the push to Heroku was the same:
remote: Procfile declares types -> (none)
So Heroku didn't even notice that the Procfile was missing?!
Then I put the Procfile back and I still get the same error:
2019-06-08T18:49:34.853568+00:00 heroku[router]: at=error code=H14 desc="No web processes running" method=GET path="/" host=lj-stage.herokuapp.com request_id=d592d4e6-7558-4003-ab55-b3081502f5cf fwd="50.203.248.222" dyno= connect= service= status=503 bytes= protocol=http
I've also read about multiple buildpacks needing to be in a certain order, which might cause this error, but I only have one:
(hattie-nHCNXwaX) malikarumi@Tetuoan2:~/Projects/hattie/hattie$ heroku buildpacks
› Warning: heroku update available from 7.7.8 to 7.24.4
=== lj-stage Buildpack URL
heroku/python
Furthermore, I did a word search through the Python buildpack on GitHub and didn't see anything to indicate the buildpack is doing anything other than rely on the Procfile for process types.
I also tried heroku ps:scale web=1, which gives the 'couldn't find that process type' error.
There are several other similar questions here on SO, a lot of them don't have answers, and I tried the ones that did. Any assistance greatly appreciated.
update:
As requested, here is my tree. The names next to Procfile are Django models:
(hattie-nHCNXwaX) malikarumi@Tetuoan2:~/Projects/hattie$ tree -L 2
├── =2.2
├── hattie
│ ├── academy
│ ├── account
│ ├── airflow_tutorial_script.py
│ ├── bar
│ ├── bench
│ ├── caseAT
│ ├── codeAT
│ ├── commentaryAT
│ ├── consultant
│ ├── contact_form
│ ├── government
│ ├── hattie
│ ├── hattie.sublime-project
│ ├── hattie.sublime-workspace
│ ├── How It Works - Sort Sequences
│ ├── legislature
│ ├── manage.py
│ ├── pac
│ ├── people
│ ├── post
│ ├── Procfile
│ ├── static
│ ├── staticfiles
│ ├── templates
│ └── utilities
├── hattie pipenv
├── pipenv for refactor4
├── Pipfile
├── Pipfile.lock
├── refactor4.sublime-project
└── refactor4.sublime-workspace
And here is the content of my Procfile:
web: gunicorn hattie.wsgi --log-file -
Your Procfile must be in the root of your repository. Move it there and redeploy.
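To illustrate what "Procfile declares types -> (none)" means, here is a rough sketch of how a Procfile line maps a process type to a command (the regex is an approximation, not Heroku's exact grammar). Heroku only reads the file from the repository root, so a Procfile one directory down, as in the tree above, is never parsed at all:

```python
import re

# Rough approximation of Procfile parsing: each line maps a process
# type (e.g. "web") to the command Heroku should run for it. An empty
# result is what "Procfile declares types -> (none)" reflects.
def parse_procfile(text):
    types = {}
    for line in text.splitlines():
        match = re.match(r"^([A-Za-z0-9_-]+):\s*(.+)$", line)
        if match:
            types[match.group(1)] = match.group(2).strip()
    return types
```

Once a `web` type is declared in a root-level Procfile, `heroku ps:scale web=1` has a process type to find.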
I'm trying to run a script on AWS Lambda that sends data to Google Cloud Storage (GCS) at the end. When I do so locally, it works, but when I run the script on AWS Lambda, importing the GCS client library fails (other imports work fine though). Anyone know why?
Here's an excerpt of the script's imports:
# main_script.py
import robobrowser
from google.cloud import storage
# ...generate data...
# ...send data to storage...
The error message from AWS:
Unable to import module 'main_script': No module named google.cloud
To confirm that the problem is with the google client library import, I ran a version of this script in AWS Lambda with and without the GCS import (commenting out the later references to it) and the script proceeds as usual without import-related errors when the GCS client library import is commented out. Other imports (robobrowser) work fine at all times, locally and on AWS.
I'm using a virtualenv with python set to 2.7.6. To deploy to AWS Lambda, I'm going through the following manual process:
zip the pip packages for the virtual environment:
cd ~/.virtualenvs/{PROJECT_NAME}/lib/python2.7/site-packages
zip -r9 ~/Code/{PROJECT_NAME}.zip *
zip the contents of the project, adding them to the same zip as above:
zip -g ~/Code/{PROJECT_NAME}.zip *
upload the zip to AWS and test using the web console
Here is a subset of the result from running tree inside ~/.virtualenvs/{PROJECT_NAME}/lib/python2.7/site-packages:
...
│
├── google
│ ├── ...
│ ├── cloud
│ │ ├── _helpers.py
│ │ ├── _helpers.pyc
│ │ ├── ...
│ │ ├── bigquery
│ │ │ ├── __init__.py
│ │ │ ├── __init__.pyc
│ │ │ ├── _helpers.py
│ │ │ ├── _helpers.pyc
│ │ ├── ...
│ │ ├── storage
│ │ │ ├── __init__.py
│ │ │ ├── __init__.pyc
│ │ │ ├── _helpers.py
│ │ │ ├── _helpers.pyc
├── robobrowser
│ ├── __init__.py
│ ├── __init__.pyc
│ ├── browser.py
│ ├── browser.pyc
│ ├── ...
...
Unzipping and inspecting the contents of the zip confirms this structure is kept intact during the zipping process.
I was able to solve this problem by adding __init__.py to the google and google/cloud directories in the pip installation for google-cloud. Despite the current google-cloud package (0.24.0) claiming to support Python 2.7, the package structure as downloaded by pip seems to cause problems for me.
In the interest of reporting everything, I also had a separate problem after doing this: AWS lambda had trouble importing the main script as a module. I fixed this by recreating the repo step-by-step from scratch. Wasn't able to pinpoint the cause of this 2nd issue, but hey. Computers.
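The workaround above can be sketched as a small helper run against the virtualenv's site-packages before zipping. The function name and path argument are mine, not part of any official tooling:

```python
import os

# Create the empty __init__.py files that the zipped google/ and
# google/cloud/ namespace directories were missing, so the Python 2
# import machinery on Lambda treats them as regular packages.
def add_init_files(site_packages):
    created = []
    for pkg in ("google", os.path.join("google", "cloud")):
        init_file = os.path.join(site_packages, pkg, "__init__.py")
        if not os.path.exists(init_file):
            open(init_file, "w").close()
            created.append(init_file)
    return created
```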
I just tried writing a simple Django application hosted on AWS Elastic Beanstalk. I can run the server successfully on my localhost. However, when I deploy it on EB, it failed with an 500 error.
Here is my project tree
.
├── README.md
├── db.sqlite3
├── djangosite
│ ├── __init__.py
│ ├── settings.py
│ ├── urls.py
│ └── wsgi.py
├── intro
│ ├── __init__.py
│ ├── admin.py
│ ├── apps.py
│ ├── migrations
│ │ └── __init__.py
│ ├── models.py
│ ├── tests.py
│ └── views.py
├── manage.py
├── requirement.txt
└── templates
└── index.html
I didn't find a log entry with the matching timestamp in the logs. Usually a 500 means there is something wrong with my code, but it runs well if I start the server locally:
$ python manage.py runserver
I tried to use eb ssh to log in to the instance and found there is no Django in /opt/current/app where my code sits.
But I did add Django==1.9.8 to requirement.txt. It seems eb did not install Django. It is also not in /opt/python/run/venv/lib/python2.7/site-packages/.
(I don't have enough reputation to comment)
I'm assuming that your application starts at all on the production server (you don't mention whether it does).
Did you change DEBUG=False on the production server? Then uncaught exceptions cause a 500 response, while having DEBUG=True in development (locally) shows you the debug screen instead.
I made a small Django app; I want to deploy it on AWS. I followed the commands here. Now when I do eb create it fails, saying:
ERROR: Your requirements.txt is invalid. Snapshot your logs for details.
ERROR: [Instance: i-05fde0dc] Command failed on instance. Return code: 1 Output: (TRUNCATED)...)
File "/usr/lib64/python2.7/subprocess.py", line 540, in check_call
raise CalledProcessError(retcode, cmd)
CalledProcessError: Command '/opt/python/run/venv/bin/pip install -r /opt/python/ondeck/app/requirements.txt' returned non-zero exit status 1.
Hook /opt/elasticbeanstalk/hooks/appdeploy/pre/03deploy.py failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
INFO: Command execution completed on all instances. Summary: [Successful: 0, Failed: 1].
Detailed logs are here. My database is PostgreSQL; do I have to run a separate RDS instance for that?
My config.yml
branch-defaults:
  master:
    environment: feedy2-dev
    group_suffix: null
global:
  application_name: feedy2
  default_ec2_keyname: aws-eb
  default_platform: Python 2.7
  default_region: us-west-2
  profile: eb-cli
  sc: git
My 01-django-eb.config
option_settings:
  "aws:elasticbeanstalk:application:environment":
    DJANGO_SETTINGS_MODULE: "feedy2.settings"
    PYTHONPATH: "/opt/python/current/app/feedy2:$PYTHONPATH"
  "aws:elasticbeanstalk:container:python":
    WSGIPath: "feedy2/feedy2/wsgi.py"
container_commands:
  01_migrate:
    command: "django-admin.py migrate"
    leader_only: true
My directory structure :
.
├── feedy2
│ ├── businesses
│ │
│ ├── customers
│ │
│ ├── db.sqlite3
│ ├── feedy2
│ │ ├── __init__.py
│ │ ├── __init__.pyc
│ │ ├── settings.py
│ │ ├── settings.pyc
│ │ ├── urls.py
│ │ ├── urls.pyc
│ │ ├── wsgi.py
│ │ └── wsgi.pyc
│ ├── manage.py
│ ├── questions
│ │
│ ├── static
│ ├── surveys
│ └── templates
├── readme.md
└── requirements.txt
You truncated the relevant part of the output, but it's in the pastebin link:
Collecting psycopg2==2.6.1 (from -r /opt/python/ondeck/app/requirements.txt (line 20))
Using cached psycopg2-2.6.1.tar.gz
Complete output from command python setup.py egg_info:
running egg_info
creating pip-egg-info/psycopg2.egg-info
writing pip-egg-info/psycopg2.egg-info/PKG-INFO
writing top-level names to pip-egg-info/psycopg2.egg-info/top_level.txt
writing dependency_links to pip-egg-info/psycopg2.egg-info/dependency_links.txt
writing manifest file 'pip-egg-info/psycopg2.egg-info/SOURCES.txt'
warning: manifest_maker: standard file '-c' not found
Error: pg_config executable not found.
You need to install the postgresql[version]-devel package. Put the following in .ebextensions/packages.config:
packages:
  yum:
    postgresql94-devel: []
Source: Psycopg2 on Amazon Elastic Beanstalk
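You can also reproduce this failure mode locally before pushing: psycopg2's setup.py shells out to pg_config to locate the PostgreSQL headers and libraries, so the build fails whenever that executable is not on PATH. A quick check (this helper is my own sketch, using the standard-library `shutil.which` lookup):

```python
import shutil

# psycopg2's build calls pg_config to find PostgreSQL headers and
# libraries; if it is not on PATH, `pip install psycopg2` fails with
# exactly the "pg_config executable not found" error shown above.
def pg_config_available():
    return shutil.which("pg_config") is not None
```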
I am having some trouble setting the DJANGO_SETTINGS_MODULE for my Django project.
I have a directory at ~/dev/django-project. In this directory I have a virtual environment which I have set up with virtualenv, and also a django project called "blossom" with an app within it called "onora". Running tree -L 3 from ~/dev/django-project/ shows me the following:
.
├── Procfile
├── blossom
│ ├── __init__.py
│ ├── __init__.pyc
│ ├── fixtures
│ │ └── initial_data_test.yaml
│ ├── manage.py
│ ├── onora
│ │ ├── __init__.py
│ │ ├── __init__.pyc
│ │ ├── admin.py
│ │ ├── admin.pyc
│ │ ├── models.py
│ │ ├── models.pyc
│ │ ├── tests.py
│ │ └── views.py
│ ├── settings.py
│ ├── settings.pyc
│ ├── sqlite3-database
│ ├── urls.py
│ └── urls.pyc
├── blossom-sqlite3-db2
├── requirements.txt
└── virtual_environment
├── bin
│ ├── activate
│ ├── activate.csh
│ ├── activate.fish
│ ├── activate_this.py
│ ├── django-admin.py
│ ├── easy_install
│ ├── easy_install-2.7
│ ├── gunicorn
│ ├── gunicorn_django
│ ├── gunicorn_paster
│ ├── pip
│ ├── pip-2.7
│ ├── python
│ └── python2.7 -> python
├── include
│ └── python2.7 -> /System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7
└── lib
└── python2.7
I am trying to dump my data from the database with the command
django-admin.py dumpdata
My approach is to run cd ~/dev/django-project and then run source virtual_environment/bin/activate and then run django-admin.py dumpdata
However, I am getting the following error:
ImportError: Settings cannot be imported, because environment variable DJANGO_SETTINGS_MODULE is undefined.
I did some googling and found this page: https://docs.djangoproject.com/en/dev/topics/settings/#designating-the-settings
which tell me that
When you use Django, you have to tell it which settings you're using.
Do this by using an environment variable, DJANGO_SETTINGS_MODULE. The
value of DJANGO_SETTINGS_MODULE should be in Python path syntax, e.g.
mysite.settings. Note that the settings module should be on the Python
import search path.
Following a suggestion at Setting DJANGO_SETTINGS_MODULE under virtualenv? I appended the lines
export DJANGO_SETTINGS_MODULE="blossom.settings"
echo $DJANGO_SETTINGS_MODULE
to virtual_environment/bin/activate. Now, when I run the activate command in order to activate the virtual environment, I get output reading:
DJANGO_SETTINGS_MODULE set to blossom.settings
This looks good to me, but now the problem I have is that running
django-admin.py dumpdata
returns the following error:
ImportError: Could not import settings 'blossom.settings' (Is it on sys.path?): No module named blossom.settings
What am I doing wrong? How can I check the sys.path? How is this supposed to work?
Thanks.
Don't run django-admin.py for anything other than the initial project creation. For everything after that, use manage.py, which takes care of finding the settings.
I just encountered the same error, and eventually managed to work out what was going on (the big clue was (Is it on sys.path?) in the ImportError).
You need to add your project directory to PYTHONPATH; this is what the documentation means by
Note that the settings module should be on the Python import search path.
To do so, run
$ export PYTHONPATH=$PYTHONPATH:$PWD
from the ~/dev/django-project directory before you run django-admin.py.
You can add this command (replacing $PWD with the actual path to your project, i.e. ~/dev/django-project) to your virtualenv's source script. If you choose to advance to virtualenvwrapper at some point (which is designed for this kind of situation), you can add the export PY... line to the auto-generated postactivate hook script.
mkdjangovirtualenv automates this even further, adding the appropriate entry to the Python path for you, but I have not tested it myself.
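What the ImportError boils down to: Django simply imports the dotted path in DJANGO_SETTINGS_MODULE, so "blossom.settings" resolves only if ~/dev/django-project is on sys.path. The helper below is a hypothetical sketch of that resolution step, not Django code:

```python
import importlib
import sys

# Mimic Django's settings lookup: try to import the dotted module path,
# optionally with the project directory prepended to sys.path. With no
# project_dir this fails exactly like "No module named blossom.settings".
def can_import_settings(dotted_path, project_dir=None):
    if project_dir:
        sys.path.insert(0, project_dir)
    try:
        importlib.import_module(dotted_path)
        return True
    except ImportError:
        return False
    finally:
        if project_dir:
            sys.path.remove(project_dir)
```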
On a Unix-like machine you can simply alias the virtualenv setup like this and use the alias instead of typing it every time:
.bashrc
alias cool='source /path_to_ve/bin/activate; export DJANGO_SETTINGS_MODULE=django_settings_folder.settings; cd path_to_django_project; export PYTHONPATH=$PYTHONPATH:$PWD'
My favourite alternative is passing the settings file as a runtime parameter to manage.py in Python package syntax, e.g.:
python manage.py runserver --settings folder.filename
More info in the Django docs.
I know there are plenty of answers, but just for the record this is the one that worked for me.
1. Navigate to your .virtual_env folder where all the virtual environments are.
2. Go to the environment folder specific to your project.
3. Append export DJANGO_SETTINGS_MODULE=<django_project>.settings, or export DJANGO_SETTINGS_MODULE=<django_project>.settings.local if you are using a separate settings file stored in a settings folder.
Yet another way to deal with this issue is to use the python-dotenv package and include PYTHONPATH and DJANGO_SETTINGS_MODULE in the .env file along with your other environment variables. Then modify your manage.py and wsgi.py to load them as stated in the instructions.
from dotenv import load_dotenv
load_dotenv()
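If you'd rather see the mechanics without the dependency, here is a minimal stand-in for the part of load_dotenv() this setup relies on. It is deliberately simplified (no quoting or variable interpolation, which the real package handles):

```python
import os

# Minimal stand-in for load_dotenv(): read KEY=VALUE lines from a .env
# file into os.environ so manage.py / wsgi.py can pick up variables
# like DJANGO_SETTINGS_MODULE. Existing variables are left untouched.
def load_env_file(path=".env"):
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```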
I had a similar error while working on a Windows machine. My problem was using the wrong debug configuration; use Python: Django as your debug config option.
First ensure you've exported/set DJANGO_SETTINGS_MODULE correctly here.