Force recreation of Heroku slug on git push - django

I've been hosting a Django app on Heroku for a while, and so far it has worked fine. I recently made some changes to the requirements.txt file, but it seems they are not correctly installed on Heroku when I git push. The log output shows the installation of some of the requirements, but not all of them. I suspect the slug does not get recreated, but rather just updated. Is there a way to force recreation of the complete slug?

This plugin, https://github.com/lstoll/heroku-repo, will let you do that. Just install it and run heroku repo:rebuild -a appname
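A minimal sketch of the install-and-rebuild flow (commands may differ in newer versions of the plugin; appname is a placeholder for your app):
heroku plugins:install heroku-repo
heroku repo:rebuild -a appname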

Related

Unable to push migrations to Heroku from Django

I am a student, going through a tutorial to build a website with Next.js and Django/Python. I have zero experience with this stuff and it's been a painful process so far.
At this point in the tutorial, I have created a Heroku account, deployed my Django project to Heroku through git, and created the PostgreSQL database.
The next step, as the guy in the video says, is to migrate the data from Django into the database. I've done the whole "py manage.py makemigrations" locally and then tried to push those files to Heroku as I've read in other threads, but that doesn't work. In the tutorial, the guy just runs heroku run python manage.py makemigrations and it works fine. This is what happens when I try it:
I don't understand what to do... I've been Googling for the last hour or so and cannot find a solution... I appreciate anyone who can help me. I'm sure it's something stupid/simple, but I am not a programmer or developer, so I have no clue at this point.
Seems like manage.py is not committed to your git repo; try adding it:
git add manage.py
git commit
git push heroku master
Figured it out, just needed to add the path to manage.py and it worked :)

Deploying Django to production: correct way to do it?

I am developing Django Wagtail application on my local machine connected to a local postgres server.
I have a test server and a production server.
However, when I develop locally and try to upload it, there is always some issue with makemigrations and migrate, e.g. a KeyError.
What are the best practices for ensuring I do not run into these issues? Which files do I need to port across?
So I'll tell you what I do and what most of the companies where I worked as a Django developer did, and I can tell you from experience that it works pretty well.
First, containerize your application. This will make your life much easier and will remove external influences from your code; it also gives you an easy way to reproduce your environment.
Your Dockerfile should start from some Python image and should do three basic things (a sketch follows the list):
Install your requirements dependencies
Run the python manage.py migrate --noinput command
Run a http server such as gunicorn with gunicorn -c /gunicorn.py wsgi:application
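A minimal sketch of such a Dockerfile, assuming the app lives in /app/ to match the gunicorn config below (the Python version and paths are placeholder choices):
FROM python:3.11-slim
WORKDIR /app
# Install your requirements dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# Run migrations at container start (not at build time, when no database is
# reachable), then hand off to gunicorn
CMD python manage.py migrate --noinput && gunicorn -c /gunicorn.py wsgi:application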
You will do the makemigrations on your local machine and make sure that everything is working before committing to the repo.
In your gunicorn.py you will put your settings to run the app, such as the number of workers (based on the CPU count), the binding port, and the folder your app is in, something like:
import multiprocessing
# Chdir to specified directory before apps loading.
# https://docs.gunicorn.org/en/stable/settings.html#chdir
chdir = '/app/'
# Bind the application to port 8000 on all IPv4 interfaces.
# https://docs.gunicorn.org/en/stable/settings.html#bind
bind = '0.0.0.0:8000'
# Size the worker pool from the CPU count.
# https://docs.gunicorn.org/en/stable/settings.html#workers
workers = multiprocessing.cpu_count() * 2 + 1
Second, containerize your other stuff: for example the Postgres database, Redis (for cache), and, depending on the size of your application, a connection pooler for the database.
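A hedged sketch of running those services as containers (the image tags, password, and ports are placeholder choices):
docker run -d --name postgres -e POSTGRES_PASSWORD=changeme -p 5432:5432 postgres:15
docker run -d --name redis -p 6379:6379 redis:7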
It's highly recommended that you have a step in the pipeline to run tests; they need to run before everything else, maybe just after lint.
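For example, reusing the image tagged in the next snippet, the test step can be as simple as this sketch (it may also need a database reachable from the container):
docker run --rm $IMAGE_ID:staging python manage.py test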
Ok, what now? Now you need a way to deploy that stuff. The best option for this scenario is: push your image to the GitHub Container Registry, and you can add a tag to it, for example:
IMAGE_ID=ghcr.io/${{ github.repository_owner }}/$IMAGE_NAME
# Change all uppercase to lowercase
IMAGE_ID=$(echo $IMAGE_ID | tr '[A-Z]' '[a-z]')
docker tag $IMAGE_NAME $IMAGE_ID:staging
docker push $IMAGE_ID:staging
This can be added in a GitHub Action, in the build step for example.
After having your new code in a new image on GitHub, you just need to update the current one. This can be done by creating a script to do it on the server and running that script from a GitHub Action; it is something like:
docker pull ghcr.io/${{ github.repository_owner }}/$IMAGE_NAME
echo 'Restarting Application...'
docker stop {YOUR_CONTAINER} && docker compose up -d  # recreate the container from the freshly pulled image
sudo systemctl restart nginx
echo 'Cleaning old images'
sudo docker system prune -af
You can see that I create the image with a staging tag. You can create a rule in GitHub Actions, for example, to trigger that action when you create a new release, and create another action, triggered on every new commit, that builds/deploys a dev tag.
For the migration problem, the first thing is: when your application goes live, squash every migration into the first one (you can drop the database and all the migrations, then recreate the database and run the makemigrations command again to achieve this), so you have a clean migration history on the server. Never create unnecessary relations between tables, prefer cached properties over adding new columns, use UUIDs for unique ids, and try not to make breaking changes in the database. It's hard, but if you plan the database beforehand it is not so difficult.
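A hedged sketch of that drop-and-recreate flow ("myapp" and "mydb" are placeholder names; this throws away data, so only do it before the app has real users):
rm myapp/migrations/0*.py      # keep migrations/__init__.py
dropdb mydb && createdb mydb   # Postgres CLI tools
python manage.py makemigrations
python manage.py migrate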
Hit me up if you have any questions. A lot of the stuff I said can be done on other platforms such as GitLab, Travis, or Circle CI, but I used GitHub Actions in the examples because I think it is simpler to picture.
EDIT:
I forgot to tell you to have a cron job on your server doing backups of your databases. The migrate command will apply the changes only after verification, but if something else breaks the database, this can save your life.
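A minimal sketch of such a backup job (the schedule, database name, and paths are placeholder choices; note that % must be escaped in crontab entries):
# crontab entry: dump the database every night at 03:00
0 3 * * * pg_dump -U postgres mydb | gzip > /backups/mydb-$(date +\%F).sql.gz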

Local -> Github working | Local -> Heroku not working

I have a Django application on my local system.
As I want to ignore db.sqlite3 files while transferring the repository, I have put the following in .gitignore
db.sqlite3
I push it to Github using:
git push origin master
When I do this, the updated db.sqlite3 from my local system is NOT transferred to GitHub.
As the next step, I need to transfer the files from my local system to Heroku using:
git push heroku master
However, it seems that the file from GitHub is copied to Heroku, which is weird.
Perhaps my understanding of the git push heroku master command is incorrect.
The deployment method is the Heroku CLI.
To verify this weird behavior:
1. I added a couple of entries to db.sqlite3 on my local system
2. I made a small change to the code on my local system
3. I made new entries in the Django application which is deployed to Heroku
4. I pushed the application to GitHub using git push origin master and checked the timestamp on db.sqlite3 there; it hadn't changed. I downloaded the db.sqlite3 from GitHub and checked: the new entries that I made on my local system weren't there. This is good.
5. I pushed the application to Heroku using git push heroku master and found that the entries I made in step 3 were gone, and the entries from step 1 were not reflected either.
I compared my GitHub db.sqlite3 file with the Heroku db.sqlite3 file, and they matched.
My requirements are as follows:
The changes to the data in the db that I make on my local system should not be reflected in the application deployed to Heroku (hence, I believe, .gitignore -> db.sqlite3)
Only the structural and application changes should go to production.
Any pointers in the right direction ?
I figured this out, like my last two queries.
I was misled by this command :
git update-index --assume-unchanged db.sqlite3
Though this link clearly tells you not to do so.
For the solution, git and .gitignore work perfectly fine (stating the obvious). The .gitignore needs only one entry, db.sqlite3, and you need to ensure that you do not send db.sqlite3 to Heroku. Keep your .gitignore file updated with db.sqlite3 and use PostgreSQL on Heroku.
When I did this, I received an error saying django-session was not set up. Basically it meant that PostgreSQL was not ready for use. You need to ensure that you are ready to follow the steps below.
A few things to remember:
When experimenting with Django locally, you use db.sqlite3 and are eager to send the database file db.sqlite3 to Heroku, so you don't make an entry in .gitignore. Don't do that.
Locally, use db.sqlite3; when deploying to Heroku, use PostgreSQL.
Create a virtual environment using pipenv
Use pipenv install psycopg2
Use heroku run bash -a <appname>
Go to the folder containing manage.py and run python manage.py migrate
Create your superuser python manage.py createsuperuser
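One step the list above glosses over is provisioning the Postgres add-on itself; a hedged sketch (plan names change over time, and <appname> is a placeholder as above):
heroku addons:create heroku-postgresql:hobby-dev -a <appname>
heroku config:get DATABASE_URL -a <appname>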
This worked for me. I shall come back and update this a bit more. Three days of brain-wreck.
Finally, keep searching on GitHub; we have a goldmine of problems and solutions already provided. Sometimes we just need to connect the dots.

Multiple Django sites with shared codebase and DB

I have created a Django project with 20 sites (a different domain per site) for 20 different countries. The sites share everything: codebase, database, URLs, templates, etc.
The only things they don't share are small customizations (logo, background color of the CSS theme, language code, etc.) that I set in each site's settings file (each site has one settings file, and all of these files import a global settings file with the common stuff). Right now, in order to run the sites in development mode I do:
django-admin.py runserver 8000 --settings=config.site_settings.site1
django-admin.py runserver 8001 --settings=config.site_settings.site2
...
django-admin.py runserver 8020 --settings=config.site_settings.site20
I have a couple of questions:
I've read that it is possible to create a virtual host for each site (domain) and pass it the site's settings.py file. However, I am afraid this would create one Django instance per site. Am I right?
Is there a more efficient way of doing the deployment? I've read about django-dynamicsites but I am not sure if it is the right way to go.
If I decide to deploy using Heroku, it seems that Heroku expects only one settings file per app, so I would need to have 20 apps. Is there a solution for that?
Thanks!
So, I recently did something similar, and found that the strategy below is the best option. I'm going to assume that you are familiar with git branching at this point, as well as Heroku remotes. If you aren't, you should read this first: https://devcenter.heroku.com/articles/git#multiple-remotes-and-environments
The main strategy I'm taking is to have a single codebase (a single Git repo) with:
A master branch that contains all your shared code: templates, views, URLs.
Many site branches, based on master, which contain all site-specific customizations: css, images, settings files (if they are vastly different).
The way this works is like so:
First, make sure you're on the master branch.
Second, create a new git branch for one of your domains, eg: git checkout -b somedomain.com.
Third, customize your somedomain.com branch so that it looks the way you want.
Next, deploy somedomain.com live to Heroku by running heroku create somedomain.com --remote somedomain.com. (Note: Heroku app names may only contain lowercase letters, digits, and dashes, so in practice you would use something like somedomain-com as the app name.)
Now, push your somedomain.com branch code to your new Heroku application: git push somedomain.com somedomain.com:master. This will deploy your code on Heroku.
Now that you've got your somedomain.com branch deployed with its own Heroku application, you can do all normal Heroku stuff by adding --remote somedomain.com to your normal Heroku commands, eg:
heroku pg:info --remote somedomain.com
heroku addons:add memcache:5mb --remote somedomain.com
etc.
So, now you've basically got two branches: a master branch, and a somedomain.com branch.
Go back to your master branch, and make another new branch for your next domain: git checkout master; git checkout -b anotherdomain.com. Then customize it to your liking (css, site-specific stuff), and deploy the same way we did above.
Now I'm sure you can see where this is going. We've got one git branch for each of our custom domains, and each domain has its own Heroku app. The benefit (obviously) is that each of these project customizations is based off the master branch, which means that you can easily make updates to all sites at once.
Let's say you update one of your views in your master branch: how can you deploy it to all your custom sites at once? Easily!
Just run:
git checkout somedomain.com
git merge master
git push somedomain.com somedomain.com:master # deploy the changes
And repeat for each of your domains. In my environment, I wrote a script that does this, but it's easy enough to do manually if you'd like.
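Such a script might look like this sketch (the domain list is a placeholder for your actual branch names):
for domain in somedomain.com anotherdomain.com; do
    git checkout "$domain"
    git merge master
    git push "$domain" "$domain":master
done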
Anyhow, hopefully that helps.

Google App Engine Development and Production Environment Setup

Here is my current setup:
GitHub repository, a branch for dev.
myappdev.appspot.com (not real url)
myapp.appspot.com (not real url)
App written on GAE Python 2.7, using django-nonrel
Development is performed on a local dev server. When I'm ready to release to dev, I increment the version, commit, and run "manage.py upload" to myappdev.appspot.com.
Once testing is satisfactory, I merge the changes from dev to main repo. I then run "manage.py upload" to upload the main repo code to the myapp.appspot.com domain.
Is this setup good? Here are a few issues I've run into.
1) I'm new to git, so sometimes I forget to add files, and the commit doesn't notify me. So I deploy code to dev that works, but does not match what is in the dev branch. (This is bad practice).
2) The datastore file in the git repo causes issues. Merging binary files? Is it ok to migrate this file between local machines, or will it get messed up?
3) Should I be using "manage.py upload" for each release to the dev or prod environment, or is there a better way to do this? Heroku looks like it can pull right from GitHub. The way I'm doing it now seems like there is too much room for human error.
Any overall suggestions on how to improve my setup?
Thanks!
I'm on a pretty similar setup, though I'm still running on py2.5, django-nonrel.
1) I usually use 'git status' or 'git gui' to see if I forgot to check in files.
2) I personally don't check in my datastore. Are you familiar with .gitignore? It's a text file in which you list files for git to ignore when you run 'git status' and other functions. I put in .gaedata as well as .pyc and backup files.
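For example, a .gitignore along those lines (entries taken from the description above; *~ stands in for editor backup files):
.gaedata
*.pyc
*~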
To manage the database I use "python manage.py dumpdata > file", which dumps the database to a JSON-encoded file. Then I can reload it using "python manage.py loaddata file".
3) I don't know of any way to deploy from git. You can probably write a little Python script to check whether git is up to date before you deploy. Personally, though, I deploy stuff to test to make sure it's working before I check it in.
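The same check is a few lines of shell, if that's easier (a sketch of the idea, not GAE-specific):
# abort the deploy if the working tree has uncommitted changes
if [ -n "$(git status --porcelain)" ]; then
    echo "Uncommitted changes; aborting deploy"
    exit 1
fi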