To understand what I'm talking about, here is a screenshot of my GitHub repository:
I have my Django project in a GitHub repository, and I use Heroku to deploy it. To deploy to Heroku, I first need to run python manage.py collectstatic, which generates a lot of CSS and JS files, as in the screenshot above.
I want to hide this folder, not ignore it, because Heroku needs it to work properly.
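For context, collectstatic simply copies every discovered static file into whatever STATIC_ROOT points at, which is why the folder full of CSS and JS shows up in the repository once it is committed. A minimal sketch of the relevant settings (the staticfiles directory name and a pathlib-style BASE_DIR are assumptions):
STATIC_URL = "/static/"
STATIC_ROOT = BASE_DIR / "staticfiles"  # collectstatic writes its output here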
UPD 1:
So, I created a new branch called debug. The debug branch is identical to master, but without the staticfiles folder. And when I start Heroku with this branch, as I said, it gives me a 500 Server Error. Of course, I ran python manage.py collectstatic before starting.
UPD 2:
After restarting all Heroku dynos (heroku ps:restart in the CLI), everything works fine without precompiled static files.
This:
To deploy my project to Heroku, I need first to run python manage.py collectstatic
is not true. Heroku will run collectstatic for you when you deploy. You do not need to run it before deploying, and you definitely do not need to add the destination directory to git.
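If the generated directory has already been committed, a plain .gitignore entry plus untracking it is enough (this sketch assumes STATIC_ROOT points at a directory literally named staticfiles):
# .gitignore
staticfiles/
# stop tracking the already-committed copy without deleting it locally
git rm -r --cached staticfiles/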
Related
I made some updates to a project: added 1 admin model and 1 template.
I'm using Wagtail. I pulled the updates to my server, ran migrations, and they succeeded. I restarted nginx and gunicorn; I even rebooted the server.
When I go to the Wagtail admin, my admin model is missing (it exists locally). When I go to create a new page, my template is available, but when I select it I get taken to a Wagtail 404 page.
Ubuntu 20.04
nginx
gunicorn
django/wagtail
digital ocean vpc
digital ocean postgres database cluster
The site works as normal, except that the template is available but can't be selected, and the model that was migrated isn't available and isn't showing up in the admin. My local version works perfectly, with no differences. It seems like the server is both updating and not updating; I don't get it. Running makemigrations or migrate reports no changes, even when run against the specific app. Do I need to do something to reboot the database?
It looks like a caching issue with Nginx. Try clearing the cache.
I have 2 settings files: dev.py and production.py.
dev.py is connected to a sqlite3 database and production.py is connected to the DigitalOcean Postgres cluster.
python manage.py automatically uses dev.py (at least in my setup), so my migrations were working, but they were targeting an old copy of the sqlite db I had on the server, or perhaps creating one, as Django does when the file doesn't exist. Either way, the live site runs on the production.py settings, which is why the changes were being made but not reflected.
The correct way to run migrations when you have a production.py file is similar to:
python manage.py migrate --settings=<settings app>.<settings folder>.production
I'll also add that I found you need to run collectstatic each time you update your CSS stylesheet. That is a little tedious, and maybe I'm missing an automation step, but since that is the case, I run it like this:
python manage.py collectstatic --settings=<settings app>.<settings folder>.production
It doesn't cause any issues and it works, so I use the --settings flag to be safe.
git pull
python manage.py migrate --settings=app.settings.production
python manage.py collectstatic --settings=app.settings.production
sudo systemctl restart nginx
sudo systemctl restart gunicorn
I don't know whether restarting nginx is strictly necessary, but I believe it solved something with my static files updating.
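If typing --settings on every command gets tedious, a common alternative (not used in the sequence above, so treat it as a sketch) is to export DJANGO_SETTINGS_MODULE in the shell or in the gunicorn service unit so manage.py stops falling back to dev.py:
export DJANGO_SETTINGS_MODULE=app.settings.production   # module path assumed; match your project layout
python manage.py migrate
python manage.py collectstatic --noinput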
I have a Django application which is present in my local system.
As I want to ignore db.sqlite3 while transferring the repository, I have put the following in .gitignore:
db.sqlite3
I push it to Github using:
git push origin master
When I do this, the updated db.sqlite3 from my local system is NOT transferred to GitHub.
As the next step, I need to transfer the files from my local system to Heroku using:
git push heroku master
However, it seems that the file from GitHub is copied to Heroku, which is weird.
Perhaps my understanding of git push heroku master is incorrect.
The deployment method uses the Heroku CLI.
To check this weird behaviour:
I added a couple of entries to db.sqlite3 on my local system
I made a small change to the code in my local system
I made new entries in the Django application which is deployed to heroku
I pushed the application to GitHub using git push origin master and checked the timestamp on db.sqlite3 in the repository; it hadn't changed. I downloaded db.sqlite3 from GitHub and checked: the new entries I made on my local system weren't there. This is good.
I pushed the application to Heroku using git push heroku master and found that the entries I made in step 3 are gone, and the entries from step 1 are also not reflected.
I checked the GitHub db.sqlite3 file and the Heroku db.sqlite3 file, and they matched.
My requirements are as follows:
The changes to the data in the db that I make on my local system should not be reflected in the application deployed to Heroku (hence, I believe, .gitignore -> db.sqlite3).
Only structural and application changes should go to production.
Any pointers in the right direction?
I figured this out myself, as with my last two questions.
I was misled by this command:
git update-index --assume-unchanged db.sqlite3
even though this link clearly says not to do so.
For the solution, git and .gitignore work perfectly fine (stating the obvious). It requires only one entry, db.sqlite3, and you need to ensure that you do not send db.sqlite3 to Heroku. Keep your .gitignore updated with db.sqlite3 and use PostgreSQL on Heroku.
When I did this, I received an error saying django_session was not set up. Basically it meant that PostgreSQL was not ready for use. You need to ensure that you are ready to follow the steps below.
A few things to remember:
When experimenting with Django locally, it is tempting to use db.sqlite3, send the database file db.sqlite3 to Heroku, and leave it out of .gitignore. Don't do that.
Locally, use db.sqlite3; when deploying to Heroku, use PostgreSQL.
Create a virtual environment using pipenv
Use pipenv install psycopg2
Use heroku run bash -a <appname>
Go to the folder containing manage.py and run python manage.py migrate
Create your superuser: python manage.py createsuperuser
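For the PostgreSQL side, a common pattern (not spelled out in the steps above, so treat it as an assumption) is to have production settings read Heroku's DATABASE_URL config var through the dj-database-url package:
# production settings -- sketch; assumes dj-database-url is installed alongside psycopg2
import dj_database_url
DATABASES = {
    "default": dj_database_url.config(conn_max_age=600),  # parses the DATABASE_URL env var
}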
This worked for me. I shall come back and update this a bit more. Three days of brain-wreck.
Finally, keep searching on GitHub; we have a goldmine of problems and solutions already provided. Sometimes we just need to connect the dots.
I have deployed an app from a GitHub repository to my customer's Heroku account as a collaborator, but this time I had to add some new models.
However, I realized that when I deploy my changes from GitHub, Heroku does not run makemigrations and migrate.
I read some answers on Stack Overflow and understood that this is how it is supposed to be.
However, my question is: what should I do? What is the best practice for deploying model changes to a Heroku app? (I assume it is not deleting and recreating my app, since the customer already has data there.)
(I am able to run makemigrations and migrate from the bash manually, but when I have 30+ deployments it's a pain.)
Check out the new feature on Heroku called "Release Phase": https://devcenter.heroku.com/articles/release-phase. It allows you to run migrations during deployment. Just add whatever command you want to your Procfile, like this:
web: your_web_command
release: python manage.py migrate
The release command will run after your app is done building, and before it's launched.
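If you want to confirm that the migration actually ran during a deploy, recent versions of the Heroku CLI can show the release phase output (command availability depends on your CLI version):
heroku releases -a <appname>          # list releases; failed release commands are flagged
heroku releases:output -a <appname>   # print the output of the most recent release command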
I am working on a Django app, and I would like my database migrations to be run when deploying on Heroku.
So far we have simply put the following command in the Procfile:
python manage.py migrate
When deploying the migrations are indeed run, but they seem to be run once for each dyno (and we use several dynos). As a consequence, data migrations (as opposed to pure schema migrations) are run several times, and data is duplicated.
Running heroku run python manage.py migrate after the deployment is not satisfactory since we want the database to be in sync with the code at all times.
What is the correct way to do this in Heroku?
Thanks.
This is my Procfile and it is working exactly as you describe:
release: python manage.py migrate
web: run-program waitress-serve --port=$PORT settings.wsgi:application
See Heroku docs on defining a release process:
https://devcenter.heroku.com/articles/release-phase#defining-a-release-command
The release command is run immediately after a release is created, but before the release is deployed to the app’s dyno formation. That means it will be run after an event that creates a new release:
An app build
A pipeline promotion
A config var change
A rollback
A release via the platform API
The app dynos will not boot on a new release until the release command finishes successfully.
If the release command exits with a non-zero exit status, or if it’s shut down by the dyno manager, the release will be discarded and will not be deployed to the app’s formation.
Be aware, however, that this feature is still in beta.
Update:
When you have migrations that remove models and content types, Django requires a confirmation in the console:
The following content types are stale and need to be deleted:
...
Any objects related to these content types by a foreign key will also be deleted. Are you sure you want to delete these content types? If you're unsure, answer 'no'. Type 'yes' to continue, or 'no' to cancel:
The migrate command in your Procfile cannot respond to this prompt, so the release command fails. In this scenario, remove the migrate line from the Procfile, push live, run the migrate command manually, then add the line back for future deploys.
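As a rough command sequence for that workaround (app and remote names assumed):
# 1. temporarily delete the "release: python manage.py migrate" line from the Procfile, then push
git push heroku master
# 2. run the migration by hand and answer the stale content types prompt interactively
heroku run python manage.py migrate
# 3. restore the release line in the Procfile for future deploys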
Migrations do not run automatically on Heroku, but for now you can safely run them once your dyno is deployed with heroku run python manage.py migrate.
If in production, you can first put your app in maintenance mode with heroku maintenance:on --app=<app name here>.
Set up your Procfile as in the docs:
release: python manage.py migrate
web: gunicorn myproject.wsgi --log-file -
Documented at https://devcenter.heroku.com/articles/release-phase#specifying-release-phase-tasks
You can create a file bin/post_compile which will run bash commands after the build.
Note that it is still considered experimental.
Read here for more buildpack info.
See here for an example
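A minimal sketch of such a hook, assuming the Python buildpack picks up bin/post_compile and that migrations are what you want to run there (this is an illustration, not the linked example):
#!/usr/bin/env bash
# bin/post_compile -- executed by the Python buildpack after the slug is built
set -e
python manage.py migrate --noinput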
Alternatively, Heroku is working on a new Releases feature, which aims to simplify and solve this process. (Currently in Beta).
Good luck!
I have a Django app that I am successfully deploying to Heroku. When I dry-run the collectstatic command locally, everything works fine.
python manage.py collectstatic --dry-run --noinput
....
Pretending to copy '/Users/hari/.virtualenvs/bsc2/lib/python2.7/site-packages/django/contrib/admin/static/admin/js/admin/ordering.js'
Pretending to copy '/Users/hari/.virtualenvs/bsc2/lib/python2.7/site-packages/django/contrib/admin/static/admin/js/admin/RelatedObjectLookups.js'
71 static files copied.
Despite this, my Django admin static files do not get used, and I get a bare-bones Django admin site on Heroku with DEBUG set to False.
If I set DEBUG to True, I get a "rich" admin site on Heroku. With DEBUG set to True or False, the terminal output of git push heroku master does not say anything about collecting static files.
I tried the example "helloworld" application from Heroku that uses gunicorn, and it did display the "collecting static" messages. I also tried inserting this code snippet into my urls.py, but that does not help either.
from django.conf import settings
from django.conf.urls import patterns  # Django 1.4/1.5-style URL patterns

if not settings.DEBUG:
    urlpatterns += patterns('',
        (r'^static/(?P<path>.*)$', 'django.views.static.serve', {'document_root': settings.STATIC_ROOT}),
    )
Next, I tried adding the following to my Heroku config:
heroku config:add DISABLE_COLLECTSTATIC=0
But that too did not show my Django admin site with all the styles.
Finally, I tried switching to gunicorn in my Procfile, and that also did not show the admin styles. Only setting DEBUG=True shows my admin styles.
I tried this with Django 1.4.2 and 1.5.1 on Heroku, and neither shows me a "normal" admin site. Is there any way to have my admin static files on Heroku without going the S3 route?
command terminal output does not have anything about collecting staticfiles.
Looking at heroku-buildpack-python:bin/steps/collectstatic, it seems that it tries to do a collectstatic --dry-run --noinput and pipes its output to /dev/null before displaying the -----> Collecting static files message. This means that if there is an error there that is not present on your local box, you will never see it on Heroku: it will fail silently. (The best kind of failure ;)
Have you tried running a one-off dyno to test the collectstatic command and see if it's a problem in their environment?
heroku run python manage.py collectstatic --dry-run --noinput
If this fails, it will give you an error or traceback to look into and further diagnose the issue.
Check out Django and Static Assets. It seems to have been updated recently, and you can serve static files nicely using the dj-static package.
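For reference, dj-static's documented usage is roughly the following wrapper in wsgi.py (a sketch; check the current dj-static README for the exact API):
# wsgi.py
from django.core.wsgi import get_wsgi_application
from dj_static import Cling

application = Cling(get_wsgi_application())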
Try these three things:
Create this Heroku config variable: DJANGO_SETTINGS_MODULE, with a value of myapp.settings.prod (or as appropriate for your Heroku settings file)
Use Whitenoise as described in the Heroku docs:
https://devcenter.heroku.com/articles/django-assets
Check it in and redeploy your dyno: git push heroku master
I found I was missing the first item, DJANGO_SETTINGS_MODULE. Running collectstatic from the command line would work, but that didn't matter because it ran on an ephemeral dyno.
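For item 2, a minimal WhiteNoise setup along the lines of the Heroku docs looks roughly like this (setting names vary with your Django and WhiteNoise versions, so treat this as a sketch):
# production settings -- module name assumed
MIDDLEWARE = [
    "django.middleware.security.SecurityMiddleware",
    "whitenoise.middleware.WhiteNoiseMiddleware",  # placed right after SecurityMiddleware
    # ... remaining middleware ...
]
STATIC_ROOT = BASE_DIR / "staticfiles"  # assumes a pathlib-style BASE_DIR
STATICFILES_STORAGE = "whitenoise.storage.CompressedManifestStaticFilesStorage"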
Throwing it out there, since dumb mistakes happen:
I spent a lot of time trying to figure out why my module wasn't found on Heroku when it was working fine locally, only to realize it was being ignored because of an entry in .slugignore.