couldn't find that process type - django

I am having trouble with the "couldn't find that process type" error on Heroku. I submitted a ticket Thursday but still don't have a solution and they are not open for folks like me on the weekend, so I am posting here.
Please note:
This is a Django app
It runs locally under both heroku local and Django's runserver, but not on Heroku itself.
I was following a solution I read here:
Couldn't find that process type, Heroku
which was to remove the Procfile, commit, then put it back and commit again, after which it should work.
The output from the push to Heroku was the same:
remote: Procfile declares types -> (none)
So Heroku didn't even notice that the Procfile was missing?!
Then I put the Procfile back and I still get the same error:
2019-06-08T18:49:34.853568+00:00 heroku[router]: at=error code=H14 desc="No web processes running" method=GET path="/" host=lj-stage.herokuapp.com request_id=d592d4e6-7558-4003-ab55-b3081502f5cf fwd="50.203.248.222" dyno= connect= service= status=503 bytes= protocol=http
I've also read about multiple buildpacks needing to be in a certain order, which might cause this error, but I only have one:
(hattie-nHCNXwaX) malikarumi@Tetuoan2:~/Projects/hattie/hattie$ heroku buildpacks
› Warning: heroku update available from 7.7.8 to 7.24.4
=== lj-stage Buildpack URL
heroku/python
Furthermore, I did a word search through the Python buildpack on GitHub and didn't see anything to indicate the buildpack is doing anything other than relying on the Procfile for process types.
I also tried heroku ps:scale web=1, which gives the 'couldn't find that process type' error.
There are several other similar questions here on SO; many don't have answers, and I tried the ones that did. Any assistance greatly appreciated.
update:
As requested, here is my tree. The names next to Procfile are Django models:
(hattie-nHCNXwaX) malikarumi@Tetuoan2:~/Projects/hattie$ tree -L 2
├── =2.2
├── hattie
│   ├── academy
│   ├── account
│   ├── airflow_tutorial_script.py
│   ├── bar
│   ├── bench
│   ├── caseAT
│   ├── codeAT
│   ├── commentaryAT
│   ├── consultant
│   ├── contact_form
│   ├── government
│   ├── hattie
│   ├── hattie.sublime-project
│   ├── hattie.sublime-workspace
│   ├── How It Works - Sort Sequences
│   ├── legislature
│   ├── manage.py
│   ├── pac
│   ├── people
│   ├── post
│   ├── Procfile
│   ├── static
│   ├── staticfiles
│   ├── templates
│   └── utilities
├── hattie pipenv
├── pipenv for refactor4
├── Pipfile
├── Pipfile.lock
├── refactor4.sublime-project
└── refactor4.sublime-workspace
And here is the content of my Procfile:
web: gunicorn hattie.wsgi --log-file -

Your Procfile must be in the root of your repository. Judging by your tree, yours sits one level down, inside the hattie/ subdirectory. Move it up to the repo root, commit, and redeploy.
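Concretely, the fix looks something like the following. This is demonstrated in a scratch directory under /tmp so it is safe to run as-is; the paths stand in for the real repository, and the git/heroku steps are shown as comments:

```shell
# Scratch demo: Heroku only reads a Procfile at the repository root,
# so the fix is moving it up out of the Django project subdirectory.
mkdir -p /tmp/procfile-demo/hattie
printf 'web: gunicorn hattie.wsgi --log-file -\n' \
    > /tmp/procfile-demo/hattie/Procfile

cd /tmp/procfile-demo
mv hattie/Procfile .     # in the real repo, run this at the root (next to .git)
ls Procfile              # now where Heroku's build looks for it

# In the real repo you would then commit and redeploy:
#   git add -A && git commit -m "Move Procfile to repo root"
#   git push heroku master
#   heroku ps:scale web=1
```

After the push, the build output should report a web type on the "Procfile declares types" line rather than "(none)".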

Related

Django best practice for scripts organizing

I had the project on Django 3.1 with the following layout:
.
├── app
│   ├── app
│   │   ├── asgi.py
│   │   ├── __init__.py
│   │   ├── settings.py
│   │   ├── urls.py
│   │   └── wsgi.py
│   ├── core
│   │   ├── admin.py
│   │   ├── apps.py
│   │   ├── fixtures
│   │   │   ├── Client.json
│   │   │   └── DataFeed.json
│   │   ├── __init__.py
│   │   ├── migrations
│   │   │   ├── 0001_initial.py
│   │   │   ├── 0002_auto_20201009_0950.py
│   │   │   └── __init__.py
│   │   ├── models.py
│   │   └── tests
│   │       └── __init__.py
│   └── manage.py
I want to add 2 scripts to this project:
download_xml.py - to check and download .xml files from external sources by schedule (every ~30 min)
update_db_info.py - to be invoked by download_xml.py and transfer data from downloaded xml to the database
What is the best Django practice for organizing scripts like these?
My ideas:
just create scripts folder inside of an app/core and put scripts there. Invoke them using cron
run python manage.py startapp db_update
so a new Django app will be created. I will remove the migrations, views, models, etc. from it, put the scripts there, and use cron again
Create an app/core/management/commands folder and put the scripts there. Call them from cron using python manage.py download_xml && python manage.py update_db_info
Option 3 (mostly)
However, if download_xml.py doesn't use or rely on Django, I would put it in a scripts directory outside of the Django project (but still in source control). You might decide not to do this if the script does need to be deployed with your app. It doesn't need to be a management command, though.
update_db_info.py definitely sounds like it would be best suited as a management command.
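For option 3, the layout Django expects for a custom management command can be sketched like this. It is written into /tmp here purely as a demonstration; the app and command names come from the question, and the handle() body is a placeholder:

```shell
# Scaffold for a custom management command, demonstrated under /tmp.
# Django discovers commands in <app>/management/commands/<name>.py,
# and both package directories need an __init__.py.
BASE=/tmp/mgmt-demo
mkdir -p "$BASE/app/core/management/commands"
touch "$BASE/app/core/management/__init__.py"
touch "$BASE/app/core/management/commands/__init__.py"

cat > "$BASE/app/core/management/commands/update_db_info.py" <<'EOF'
from django.core.management.base import BaseCommand


class Command(BaseCommand):
    help = "Transfer data from downloaded XML files into the database"

    def handle(self, *args, **options):
        # ... parse the downloaded XML and write it to the database here ...
        self.stdout.write("update_db_info: done")
EOF

ls "$BASE/app/core/management/commands"
```

With that in place, cron can call python manage.py download_xml && python manage.py update_db_info from the project directory.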

Virtualenv for a project with multiple modules

I am trying to build a Python 2 project from scratch with the structure shown below. In the past my projects had a single hierarchy, so there was a single virtualenv, but this project has multiple subpackages. What is the best practice: a single virtualenv inside the project_root directory, shared by all subpackages, or a separate virtualenv for each subpackage?
project_root/
├── commons
│   ├── hql_helper.py
│   ├── hql_helper.pyc
│   ├── __init__.py
│   └── sample_HQL.hql
├── fl_wtchr
│   ├── fl_wtchr_test.py
│   ├── fl_wtchr_test.pyc
│   ├── __init__.py
│   ├── meta_table.hql
│   ├── requirements.txt
│   ├── sftp_tmp
│   ├── sql_test.py
│   └── sql_test.pyc
├── qry_exec
│   ├── act_qry_exec_script.py
│   ├── hive_db.logs
│   ├── params.py
│   └── params.pyc
├── sqoop_a
│   ├── __init__.py
│   └── sqoop.py
└── test.py
A case could be made for creating separate virtual environments for each module; but fundamentally, you want and expect all this code to eventually be able to run without a virtualenv at all. All your modules should be able to run with whatever you install into the top-level virtual environment and so that's what you should primarily be testing against.
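A minimal sketch of the single top-level environment, run in a scratch directory under /tmp. It uses python3 -m venv for illustration, though the question targets Python 2, where the standalone virtualenv tool plays the same role:

```shell
# One virtual environment at the project root, shared by every subpackage.
cd /tmp && rm -rf venv-demo && mkdir venv-demo && cd venv-demo
python3 -m venv --without-pip .venv   # --without-pip just keeps the demo fast
. .venv/bin/activate
echo "$VIRTUAL_ENV"                   # every subpackage runs against this env

# With the env active, consolidate the per-package requirements (e.g. the
# one in fl_wtchr/requirements.txt) into a single file at the root:
#   pip install -r requirements.txt
```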

Django runs successfully at localhost but 500 on AWS EB

I just tried writing a simple Django application hosted on AWS Elastic Beanstalk. I can run the server successfully on localhost; however, when I deploy it to EB, it fails with a 500 error.
Here is my project tree
.
├── README.md
├── db.sqlite3
├── djangosite
│   ├── __init__.py
│   ├── settings.py
│   ├── urls.py
│   └── wsgi.py
├── intro
│   ├── __init__.py
│   ├── admin.py
│   ├── apps.py
│   ├── migrations
│   │   └── __init__.py
│   ├── models.py
│   ├── tests.py
│   └── views.py
├── manage.py
├── requirement.txt
└── templates
    └── index.html
I didn't find a log entry with the matching time in the logs. Usually a 500 means there may be something wrong with my code, but it runs well if I start the server locally:
$ python manage.py runserver
I used eb ssh to log into the instance and found there is no Django in /opt/current/app, where my code sits.
But I did add Django==1.9.8 to requirement.txt. It seems EB did not install Django; it is also not in /opt/python/run/venv/lib/python2.7/site-packages/.
(I don't have enough reputation to comment)
I'm assuming that your application starts at all on the production server (you don't mention whether it does).
Did you set Debug=False on the production server? Then uncaught exceptions cause a 500 response, while having Debug=True in development (locally) shows you the debug screen.
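One common pattern, sketched below, is to drive DEBUG from an environment variable so local runs keep the debug screen while the deployed app stays locked down. The variable name DJANGO_DEBUG is an assumption, not an EB convention, and the shell line mimics what the settings.py line in the comment would compute:

```shell
# DEBUG follows an environment variable: True locally, False on EB
# where the variable is left unset.
export DJANGO_DEBUG=true                                  # set locally only
DEBUG=$([ "$DJANGO_DEBUG" = "true" ] && echo True || echo False)
echo "DEBUG=$DEBUG" | tee /tmp/debug-demo.txt

# The equivalent line in settings.py would be:
#   DEBUG = os.environ.get("DJANGO_DEBUG", "") == "true"
# On EB, leave DJANGO_DEBUG unset and read the real traceback with: eb logs
```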

HTMLBars how to get started?

Is there any guide on how to start with HTMLBars? I am following the "Building HTMLBars" section but am stuck. I have run the build tool and now have files in my dist directory like this:
.
├── htmlbars-compiler.amd.js
├── htmlbars-runtime.amd.js
├── morph.amd.js
├── test
│   ├── htmlbars-compiler-tests.amd.js
│   ├── htmlbars-runtime-tests.amd.js
│   ├── index.html
│   ├── loader.js
│   ├── morph-tests.amd.js
│   ├── packages-config.js
│   ├── qunit.css
│   └── qunit.js
└── vendor
    ├── handlebars.amd.js
    └── simple-html-tokenizer.amd.js
Which of these should I add to my Ember project, and is that all, or do I have to do something more? Is this library ready, or is it still unusable with Ember?
Not even close to ready yet, I'd love to give more info, but there really isn't any. Last I heard they wanted it as a beta in 1.9, but we'll see.

Buildout, Django, and Passenger

I have a Django project in Buildout which I'd like to set up with my nginx server over Phusion Passenger.
The documentation for doing this doesn't seem to exist.
There seems to be a need for creating a passenger_wsgi.py file for setting up the WSGI environment, however I'm not sure how that will work.
Since Buildout does its own internal hacks with the Python path, how can I create and supply this file, and where in my project should I put it?
My project looks like this:
.
├── bin
│   ├── buildout
│   ├── django
│   ├── django.wsgi
│   ├── gunicorn
│   ├── ipython
│   ├── multiple-part-upload.py
│   ├── nosetests
│   ├── python
│   └── test
├── conf
│   ├── deploy
│   ├── shared
│   └── vagrant
├── src
│   ├── myproject
│   └── myproject.egg-info
├── bootstrap.py
├── bower.json
├── buildout.cfg
├── README.mkd
├── setup.py
└── Vagrantfile
Where should I put passenger_wsgi.py so that a) Passenger will find it and b) my Buildout eggs will be included in the path?
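Not a definitive answer, but a sketch of what passenger_wsgi.py could contain, placed at the project root next to buildout.cfg (Passenger conventionally looks for passenger_wsgi.py in the application root its nginx config points at). The src/ path and settings module below are assumptions for this layout; the generated bin/django.wsgi already contains the exact sys.path entries Buildout computed, so the real entries can be cribbed from there. The demo writes the file into /tmp and syntax-checks it:

```shell
# Generate a candidate passenger_wsgi.py in /tmp and syntax-check it.
# In the real project it would live at the repo root, next to buildout.cfg.
cat > /tmp/passenger_wsgi.py <<'EOF'
import os
import sys

# Buildout encodes its computed sys.path into the scripts it generates
# (see bin/django.wsgi); copy those exact entries here. The line below
# is a placeholder assumption for a src/-layout project.
HERE = os.path.dirname(os.path.abspath(__file__))
sys.path.insert(0, os.path.join(HERE, "src"))

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")

from django.core.wsgi import get_wsgi_application

application = get_wsgi_application()
EOF

python3 -m py_compile /tmp/passenger_wsgi.py && echo "syntax OK"
```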