I am learning about Heroku's app.json and app-setups features. I added an app.json to my repo's root directory and configured it to set up add-ons, env vars, etc.
Now I am trying to figure out the steps someone who has cloned my repo from GitHub and made some local edits should follow to deploy it to Heroku so that the app.json gets processed.
Heroku's help article gives an example, but I feel maybe I am missing a simpler way, because (1) it uses cURL, which many of my users might not have installed, (2) it relies on the repo being at a publicly accessible URL, rather than locally, and (3) it's more verbose than typical Heroku commands.
Is there a simpler way?
It's been a while since this question was asked, but you can look at the Heroku Button docs. It is definitely simpler: it does not use curl and can be used with private repositories.
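Roughly, the button itself is just one line of Markdown that you add to the repo's README; when someone clicks it, Heroku reads the app.json from that repo and provisions the add-ons and config vars it describes (if the source repo can't be inferred from the referring page, you can append ?template=<your repo URL> to the link):
[![Deploy](https://www.herokucdn.com/deploy/button.svg)](https://heroku.com/deploy)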
I have successfully built several web sites hosted on an Nginx server using Django, uWSGI and virtualenv.
I had never used version control but now I am starting to use Git.
I understand how to create, commit and push branches.
My question is: how to make different Git branches visible at the web address of the site I'm working on?
Do I change the Nginx config file to point somewhere different?
I just updated the dev branch of my project, and of course the site does not reflect the changes.
How can I tell the server to serve the dev branch or the master branch of the project?
I would prefer to avoid a complicated system with different subdomains for different branches — I really just want the simplest thing that will work.
[update] I have found lots of pages that explain complex ways to set up staging servers etc., but I really just want to understand what is going on... there's a giant conceptual hole in my understanding about how the server interacts with a local Git project.
Right now, the Nginx config and the uWSGI config point to a folder like:
/var/www/sitefiles
That is the Django folder (inside it is sitefiles/settings.py etc.).
It is in that folder that I did git init, some commits, branching & pushes.
Does using Git mean that the Nginx and uWSGI configs should point elsewhere?
It's pretty simple: go to the project path where the Git repository lives, check out the required branch, and touch the WSGI file so the server picks up the new code (with uWSGI this assumes touch-reload is configured to watch that file; otherwise restart uWSGI):
git checkout dev
touch project/wsgi.py
or, to roll back to the master branch:
git checkout master
touch project/wsgi.py
A customer of mine has a Heroku Python/Django application that they have asked me to take a look at, and I am trying to understand the process of getting it running on my local Windows 7 laptop. I have been searching the net without any success. Does anyone have suggestions or guides on how to do this?
Please have a look here: Collaborating with Others
They should add you as a collaborator so you can git clone the project files. They can do it via the Heroku Toolbelt installed on their computer (on the command line) or via the Heroku Dashboard.
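Roughly, that flow looks like this on the command line (the app name and email address are placeholders; sharing:add was the Toolbelt-era command, newer CLI versions call it access:add):
heroku sharing:add you@example.com --app their-app-name    # run by the customer
heroku login                                               # run by you
heroku git:clone -a their-app-name                         # clones the app's Heroku git repo
cd their-app-name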
I am quite new to programming, and all of my development has been on my local runserver using TextMate and Terminal. I have written a small app with a few hundred lines and I'd like to push it to an EC2 server. My only knowledge in terms of 'developing tools' is Django's local runserver, TextMate and Terminal.
What tools or programs should I look into learning to be able to have an effective workflow? Should I be using some sort of IDE over TextMate for this? My main concerns are being able to develop on my local runserver and then painlessly push that to my production server.
As @isbadawi said, use Fabric. It's better than just using the terminal because you can automate things. As far as deployments go, you can simplify it down to: fab -H your.host.com deploy. Inside the fabfile you write commands; a simple deploy might (see the sketch after this list):
Cause the server to download the most recent code from SCM
Update the database (syncdb / migrations / what have you)
Cause apache or whatever you're using to reload the configuration
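A minimal sketch of what such a deploy task might run on the server (the project path, branch, and reload mechanism are assumptions about your setup):
cd /srv/yourproject                 # path is an assumption
git pull origin master              # 1. fetch the latest code from SCM
python manage.py syncdb --noinput   # 2. update the database (or run your migrations)
sudo service apache2 reload         # 3. tell Apache / your WSGI server to reload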
As far as some more general tips go:
If you're using WSGI, put it under source control
The same goes for local settings files: have your deploy script rename them to local_settings.py as part of the build
If you're really going for painless, look into Django hosting services like Gondor or Ep.io. Those will have clients that you can just deploy to pseudo-painlessly, although you will have to change some settings on your side to match theirs better, as there are many many ways to deploy a Django app.
Update: Ep.io is no longer in business as a hosting service. My new go-to is Heroku.
Update 2: I used to link local_settings.py in deployments, but now I'm leaning towards using the DJANGO_SETTINGS_MODULE config variable. See rdegge's "django-skel" settings for a good way to do this.
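On Heroku that might look like the following (the settings module path is a placeholder for wherever your production settings live):
heroku config:set DJANGO_SETTINGS_MODULE=yourproject.settings.prod   # set on the production app
export DJANGO_SETTINGS_MODULE=yourproject.settings.local             # local shell equivalent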
A DVCS such as git or Mercurial will allow you to develop and test locally, and then push the changes to a remote system for staging and production.
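A minimal sketch, assuming you have SSH access to the remote box and the project is already cloned there at /srv/myproject (host and path are placeholders):
git push origin dev                                                        # push your local work to the shared repo
ssh user@staging.example.com 'cd /srv/myproject && git pull origin dev'    # update the staging checkout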
I'm new to Django and my very first project is my blog. I wonder how Django developers who use PyDev normally synchronize with a remote hosting server and update their sites?
I also would like to know, how do you combine usage of git with a django project? Should I just make a repository for the entire project?
At my company we've got an entire git repository for each project, including the Django sources, which are put on the PYTHONPATH for each project, making the Django version project-dependent. The folder structure is something like:
/.git
/projectname/app1
/projectname/app2
/projectname/manage.py
/django-lib/django/...
As django-lib is not a Python module, we include both / and /django-lib in the PYTHONPATH. If your project is becoming large, you might want to consider using git submodules on your apps.
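For illustration, that PYTHONPATH setup and the submodule suggestion might look like this (paths and the repository URL are placeholders):
export PYTHONPATH=/path/to/checkout:/path/to/checkout/django-lib       # repo root plus the vendored Django
git submodule add git@example.com:yourorg/app1.git projectname/app1    # vendor a large app as a submodule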
We've also set up several servers to support the developers. There's a testing server running a central testing database and a setup including Apache with WSGI to make testing on a real server possible, which is sometimes a bit different than the local manage.py the developers use before committing their changes.
The testing server is updated with the master branch of our git repository. We've made several scripts to allow all developers to do this without letting them log in to the server via SSH, but that is just during pre-release. After release, that server will become our staging server, and we'll remove all scripts from it to make it just like our production server.
Every developer has set up their local project to make sure that it communicates with the central testing database, which contains shared test data. I myself push my changes from the command line, but you could also use EGit for this.
When we've got a release, we put it in a separate branch, called 'release' (obviously), and the production server will pull only from that branch. This is done via SSH, but I don't really know what your server setup looks like, so I guess that last step is entirely up to you.
I hope that this has helped you a bit. I won't say that this is the best workflow possible, but it works for us and you should figure out what works for you.
Most experienced Django developers use pip (or distribute) and virtualenv to deal with all the Python packages you might need for your Django projects (including Django itself).
Personally, all I keep in my project's git repository is a bunch of segregated requirements lists generated by pip:
. ~/Dev/environs/$PROJECT_NAME/bin/activate
pip freeze > ./docs/requirements/main.list
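The flip side, on a fresh machine or server, is rebuilding the environment from that frozen list (the environs path mirrors the snippet above):
virtualenv ~/Dev/environs/$PROJECT_NAME
. ~/Dev/environs/$PROJECT_NAME/bin/activate
pip install -r ./docs/requirements/main.list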
I'm fairly sure most Django developers would be familiar with Fabric, which I use for:
streamlining local interaction with git,
pushing to our central repository,
pulling from our production or test server
touching the wsgi on the relevant server
and pretty much any other kind of task you might find yourself using an SSH terminal session for (see the sketch below).
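Roughly the kind of thing those tasks wrap, if you were typing it by hand (host, path, and the wsgi location are assumptions; with Fabric you'd define each as a task and run it with fab taskname):
git push origin master                                                                  # push local work to the central repo
ssh deploy@test.example.com 'cd /srv/project && git pull && touch project/wsgi.py'      # update a server and reload the app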
For those cases where I need to make changes to someone else's Django application in order to make it work or suit our purposes, I:
fork it on github,
clone from my forked repo
make the changes
push it up to my own repo
and provide merge requests to the original repo owner
This way, I have a repo that my pip requirements lists can keep pulling from until the original application owner gets their own repo updated.
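Concretely, the requirements entry that points pip at the fork looks something like this (URL and egg name are placeholders):
-e git+https://github.com/you/some-django-app.git#egg=some-django-app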
I have a multi-step deployment system set up, where I develop locally, have a staging app with a copy of the production db, and then the production app. I use SVN for version control.
When deploying my production app I have been just moving the urls.py and settings.py files up a directory, deleting my Django app directory with an rm -rf, and then doing an svn export from the repository, which creates a new Django app directory with my updated code. I then move my urls.py and settings.py files back into place and everything works great.
My new problem is that I am now storing user uploads in a folder inside of my Django app, so I can't just remove the whole app dir anymore or I would lose all of my users' files.
What do you think my best approach is now? Would svn export --force work, since it should just overwrite all of my changed files? Should I take an entirely new approach? I am open to advice.
You may want to watch this presentation by Jacob. It can help you improve your deployment process.
I use Bitbucket to host my repo: I simply push from my dev box and run a pull/update on the stage/prod box. Actually, I don't run these manually; I use Fabric to do them for me :).
You could use rsync or something similar to back up your uploaded files and use this backup when you deploy your project.
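For example (the paths and the uploads directory are assumptions about your layout):
rsync -av /var/www/yourapp/uploads/ /var/backups/yourapp-uploads/   # back up uploads before the export
rsync -av /var/backups/yourapp-uploads/ /var/www/yourapp/uploads/   # restore them afterwards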
For deployment you could try to use buildout:
http://www.buildout.org/
http://pypi.python.org/pypi/djangorecipe
http://jacobian.org/writing/django-apps-with-buildout/
For other deployment methods see this question:
Django deployment tools
You can move your files to Amazon S3 (http://aws.amazon.com/s3/), so you will never have to worry about moving them along with your project.
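As a one-off migration of the existing uploads, something like this would do it with the AWS command line tools (bucket name and path are placeholders); inside the Django app itself, a storage backend such as django-storages is the usual way to send new uploads straight to S3:
aws s3 sync /var/www/yourapp/uploads/ s3://your-bucket/uploads/   # copy existing uploads to the bucket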