How do I update Django on OpenShift? - django

I'm learning to deploy Django on OpenShift.
Right now I have a python-2.7 cartridge up and running with Django 1.6.
The git repo cloned into the cartridge is:
git://github.com/rancavil/django-openshift-quickstart.git (Github)
How can I update the Django version of a running webapp?
I've looked at this question, which only explains how to update a cartridge, whereas I'm asking about updating the packages inside a cartridge while keeping the cartridge itself at python-2.7.

The easiest way to achieve this is to change the setup dependencies (the install_requires parameter of setup()) in setup.py. Instead of
install_requires = ['Django<=1.6',]
as in the cartridge default, you could write
install_requires = ['Django>=1.7,<1.8',]
to get the latest release of the Django 1.7 series. More details on how to specify version requirements can be found in the Python Packaging User Guide.
With your next git push, setup.py will be executed and the packages will be updated if required.
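For orientation, a trimmed-down setup.py along those lines might look like the following; the name, version, and other metadata here are placeholders rather than the quickstart's actual values, and only the install_requires line matters for the upgrade:
from setuptools import setup

setup(
    name='YourOpenShiftApp',                 # placeholder project name
    version='1.0',
    description='OpenShift Django app',
    author='Your Name',
    # Pin the Django series you want installed on the next git push
    install_requires=['Django>=1.7,<1.8'],
)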

Warnings!
make sure the new version is OK for your app. Django 1.7 introduced built-in DB migrations, which might break compatibility. (We had some issues because we used South before that.)
before applying the upgrade, back up the app instance with a snapshot (this takes time)
note that git push takes some time, during which your application will be down.
If you want to shorten the time, you can follow this approach:
ssh into your app's OpenShift server
pip install --upgrade Django==<new version>
That upgrades Django immediately. However, the running web process still uses the older version, so you need to restart the Python cartridge.
From your local command line:
rhc cartridge restart -a <your app> -c python
Now it's running with the new Django and the downtime is minimal.
Make sure to also update setup.py as described in the other answer so it stays aligned with your next git push.
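Put together, the quick-upgrade path looks roughly like this; the app name and version are placeholders, and depending on the cartridge you may first need to activate its virtualenv (commonly $OPENSHIFT_PYTHON_DIR/virtenv/bin/activate) before pip is on the path:
rhc ssh -a <your app>
pip install --upgrade Django==1.7.11      # example version only
exit
rhc cartridge restart -a <your app> -c python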

Related

How to run the packaged application from django's polls tutorial?

https://docs.djangoproject.com/en/3.0/intro/reusable-apps/
After it's packaged, the documentation says: 'With luck, your Django project should now work correctly again. Run the server again to confirm this.' In the tutorial, the server is started with:
python manage.py runserver
But after the app is packaged and installed into the user directory, it has been moved out of the project, and manage.py isn't available. How do I run the server and test the newly installed package?
This may be a silly question, but the documentation does lack a key sentence telling you how to start the server to run the installed package.
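For what it's worth, a likely reading of the tutorial is that the reusable app is installed as a package but is still run from a Django project that lists it in INSTALLED_APPS; packaging does not delete the original project or its manage.py. A hedged sketch, using the tutorial's names as placeholders:
# install the built package into the per-user site-packages
pip install --user django-polls/dist/django-polls-0.1.tar.gz
# then run the server from a project directory that still has manage.py
# and lists 'polls' in its INSTALLED_APPS
cd mysite
python manage.py runserver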

Upgrading Redmine from 3.0.7 to 3.2.0

I am trying to upgrade my Redmine 3.0.7, which was installed from the one-click install, to the newest stable version, 3.2.0. However, when I run svn update it reports that everything is up to date, yet the version doesn't show as updated on the site's info page. I tried to follow the information here:
You can checkout the latest stable source with one of the following commands:
Subversion
svn co https://svn.redmine.org/redmine/branches/3.2-stable redmine-3.2
It will create a directory named redmine-3.2 and you'll be able to update your Redmine copy using svn update in this directory.
The information from the info page in the admin section of my Redmine:
Environment:
Redmine version 3.0.7.stable.15164
Ruby version 2.0.0-p643 (2015-02-25) [x86_64-linux]
Rails version 4.2.3
Environment production
Database adapter Mysql2
SCM:
Subversion 1.8.8
Filesystem
Redmine plugins:
no plugin installed
But it didn't work. Any help would be much appreciated.
In order to upgrade to a newer version of Redmine, specifically 3.2, you will need to switch to the 3.2-stable SVN branch and then perform the upgrade.
First off, I would recommend taking a snapshot of your Droplet so that you have a working state that you can restore in case anything goes wrong with the upgrade. If you can't power off your Droplet to take a snapshot, you can back up the files and settings manually. All uploaded files should be stored in /srv/redmine/files. The database can be backed up by running the following command:
mysqldump -u root redmine | gzip > ~/redmine_db_backup.sql.gz
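Should you ever need to roll back, the same pipeline in reverse restores the dump (assuming the database is still called redmine):
gunzip < ~/redmine_db_backup.sql.gz | mysql -u root redmine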
Then, switch to the newer SVN branch:
cd /srv/redmine
svn switch ^/branches/3.2-stable
Make sure all the required gems are installed and up to date:
bundle update
Next, you'll want to upgrade the database as well so that any changes in the database structure are applied to your existing database:
bundle exec rake db:migrate RAILS_ENV=production
bundle exec rake redmine:plugins:migrate RAILS_ENV=production
Finally, clear the cache and restart Passenger. This will log out all users.
bundle exec rake tmp:cache:clear tmp:sessions:clear RAILS_ENV=production
touch tmp/restart.txt
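If the admin info page still reports the old version afterwards, it's worth confirming that the working copy really points at the new branch before digging further, for example:
cd /srv/redmine
svn info | grep URL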
You might also want to check out the Admin -> Roles & permissions page for any new permissions.
Let me know if you have any issues. I've just tested it on a Droplet and everything went fine - so I'm hoping that everything will go smoothly for you as well.

Django Analytical Google Analytics Display Advertising working on development, staging but not production

Running Django 1.6 and Analytical 0.16.0
I have the following in my settings.py
GOOGLE_ANALYTICS_PROPERTY_ID = env_var('GOOGLE_ANALYTICS_PROPERTY_ID')
GOOGLE_ANALYTICS_DISPLAY_ADVERTISING = True
and the Google Analytics code shows up as expected when I run the site locally and on the staging server (i.e. it loads the DoubleClick dc.js analytics script); however, on production it still serves the default Google Analytics ga.js script.
It isn't affected by DEBUG being on or off, and as far as I can tell the settings and environment are the same on the production and staging servers (both running on Heroku). Can anyone offer an explanation of why this might be the case?
edit: SOLVED. It turns out I was still running Analytical 0.15.0 on the production server. I had wrongly assumed that Heroku automatically installs the latest version when no version is specified in the pip requirements.
Check that Heroku is running the same version of each package:
heroku run pip freeze
It turned out Heroku was still running an old version of django-analytical because the version number wasn't specified in the pip requirements file. Heroku won't upgrade an already-installed package unless a version is explicitly specified. Changing requirements.txt to the following solved it:
django-analytical==0.16.0
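To double-check what is actually installed on the dyno, as opposed to what requirements.txt asks for, something along these lines should do it:
heroku run pip freeze | grep -i analytical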

Creating an app on Heroku with Django and NPM

I'm writing a Django app that includes some CoffeeScript. To allow for this I'm using django-compressor, which compiles the CoffeeScript to JS before the app is launched. django-compressor requires npm to be installed on the machine to compile the CoffeeScript.
Now I want to deploy this app on Heroku. I can't put npm in my requirements.txt so I am wondering how I can get npm on the Heroku server?
If you want to avoid maintaining a custom buildpack, you can use the multi buildpack.
Using the multi buildpack is super simple:
Run heroku config:add BUILDPACK_URL=https://github.com/ddollar/heroku-buildpack-multi.git
Create a .buildpacks file in the root of your repository with two lines:
https://github.com/heroku/heroku-buildpack-nodejs.git
https://github.com/heroku/heroku-buildpack-python.git
Create a package.json file with your npm dependencies (a minimal example is sketched after these steps).
Run npm install
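A minimal package.json for this setup might look something like the one below; the name, version, and the coffee-script version range are illustrative, not prescribed by either buildpack:
{
  "name": "myapp",
  "version": "0.0.1",
  "dependencies": {
    "coffee-script": "~1.6.0"
  }
}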
Note: The multi buildpack is a much nicer way to accomplish this these days :)
I've created a fork of the official Python heroku buildpack that allows an optional npm_requirements.txt for installing such dependencies.
I am now using coffeescript and less-css with django-compressor on heroku :)
https://github.com/jiaaro/heroku-buildpack-django
Edit: To switch to my buildpack from the standard buildpack, use the heroku command-line app to set the BUILDPACK_URL environment variable:
heroku config:add BUILDPACK_URL=git://github.com/jiaaro/heroku-buildpack-django.git
You can create your own buildpack that mixes the nodejs buildpack and the python buildpack. Or compile your CoffeeScript on your own machine and put the result on S3.
I found this question in Google while solving the same problem for myself.
I merged the two official buildpacks (python and nodejs), so now one can have a Django project with a standard npm description file, package.json, by running this command:
heroku config:add BUILDPACK_URL=https://github.com/podshumok/heroku-buildpack-python
This solution differs from Jiaaro's in the following ways:
it is based on newer (Dec 2012) versions of the buildpacks (for example, it runs collectstatic on deployment)
you need a correct package.json file (at least the name and version of your product should be specified in this file)
npm dependencies should be listed in package.json
@Jiaaro's solution didn't work for me and caused a weird error:
File "almalinks/manage.py", line 8, in <module>
from django.core.management import execute_from_command_line
ImportError: No module named django.core.management
I was too tired to deal with it, so I looked around and found this nifty resource:
- The heroku-django cookbook
They explain how you can add your own scripts that hook into heroku's default buildpacks.
Worked like a charm. :)
Things have changed in Heroku land.
There is no need for multi buildpacks, .buildpacks files, or custom buildpacks. Simply add the required official Heroku buildpacks to your Heroku app and they will execute in the order they were added. Use the index option to reorder them as required.
heroku buildpacks:add --index 1 heroku/nodejs -a your_app_name
There is also no need for grunt tasks, apps like django-bower, or other specialized tools that take up server resources and slow down build time.
You can check out my tutorial on how to seamlessly integrate Django + Bower + Heroku here.

django release management (staging, testing and production) [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 5 years ago.
I've been into Django for some time now, and most of my focus has been on learning how to develop and run applications locally on my development machine. Now I am trying to learn best practices for deployment and release management.
I am trying to set up my code in GitHub and then somehow set up production and staging environments where I can push changes with minimal impact.
Are there best practices out there I could follow? And how do you create an agile environment where you can commit your code to a staging environment so customers can view the work as you do it?
I would recommend checking out the process documented by Lincoln Loop. You can go straight to their GitHub repo at django-startproject. Basically, the workflow that django-startproject creates segregates dev, test, and production. You run the dev server with
manage.py runserver 0.0.0.0:8000 --settings=<Project>.conf.dev.settings
and you execute tests with
manage.py test --settings=<Project>.conf.test.settings
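For context, the --settings paths above imply the usual "shared base plus thin per-environment override" layout; a dev override module might look roughly like this, with the import path and database settings being assumptions rather than django-startproject's exact contents:
# <Project>/conf/dev/settings.py -- illustrative sketch only
from <Project>.settings import *      # pull in the shared base settings

DEBUG = True
TEMPLATE_DEBUG = DEBUG                # Django < 1.8 style, matching this era

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': 'dev.sqlite3',        # throwaway local database
    }
}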
django-startproject will install a requirements file for pip that will allow you to specify and easily install the necessary dependencies. I strongly recommend using virtualenv in combination with django-startproject. A good tutorial on using virtualenv with Django can be found here.
django-startproject also includes a barebones fabric.py script that helps deployment on remote/cloud servers.
Of course all of the above will be under source code control with svn/hg/git/whatever.
So the deployment process on a bare-bones ubuntu/debian server would be:
sudo apt-get install python-setuptools python-dev build-essential
sudo easy_install -U pip
sudo pip install -U virtualenv
mkdir -p <path>/python-environments
cd <path>/python-environments
# Create the virtual env
virtualenv --no-site-packages --distribute <my project dir>
cd <my project dir>
# Activate the virtualenv so dependencies install into it rather than system-wide
source bin/activate
git clone https://github.com/<my project>.git
cd <my project>
# Install dependencies
pip install -r requirements.pip
# Run tests, setup apache, etc.
From then on, you can use fabric to deploy changes to your production server.
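A bare-bones fabfile for that last step might look something like this; the host, paths, and restart trick are placeholders, and the API shown is Fabric 1.x:
# fabfile.py -- illustrative deployment sketch (Fabric 1.x)
from fabric.api import cd, env, run

env.hosts = ['user@production.example.com']      # placeholder host

def deploy(tag='master'):
    """Check out the requested tag on the server and restart the app."""
    with cd('/srv/myproject'):                   # placeholder project path
        run('git fetch && git checkout %s' % tag)
        run('pip install -r requirements.pip')
        run('touch deploy/myproject.wsgi')       # placeholder: nudge mod_wsgi to reload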
setup a production and staging environment where I can push the changes with minimum impact.
This is easy in some cases and hard in some cases.
When you change the database design in Django, you must rerun syncdb, and you may have to extract and reload existing data when you do this. This is hard. Some folks use South. We did it by hand because South handles most of the cases, but not all of them.
When you release new code (no database change) the upgrades are quite trivial.
When Apache starts, mod_wsgi starts.
When mod_wsgi starts, it reads the .wsgi files to determine what to do.
The .wsgi file -- essentially -- defines the Django request-reply handling loop that will invoke your application.
When a .wsgi file's timestamp changes, mod_wsgi rereads the file. This will, in effect, restart your application.
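Concretely, such a .wsgi file is usually just a few lines; a sketch for Django 1.4+ might look like this, with the paths and settings module as placeholders matching the /opt/myapp layout described below:
# myapp.wsgi -- illustrative only
import os
import sys

# Point this vhost/location at the release it should serve
sys.path.insert(0, '/opt/myapp/myapp-2.2')
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myapp.settings')

from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()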
how do you create an agile environment whereby you can commit your code into a staging environment where customers can view the work as you do it.
This is pretty easy.
Put your application code into /opt/myapp/myapp-x.y/ directory structures. The myapp-x.y name matches a git tag name.
Staging is simply a Django configuration using the next release of the app. /opt/myapp/myapp-2.3/. Production is the current release. /opt/myapp/myapp-2.2/. Yes, there are older releases.
Define your Apache configuration to have two (or more) "locations", using the Apache <Location> directive. One location is "production" with ordinary paths; the other is "staging" with other paths. Or use a virtual host. Or any other Apache arrangement that makes you happy.
Now you have both versions running in parallel "locations".
You can tweak staging by (perhaps) redoing the database, and changing the .wsgi file to point at a new release of your application.
You can tweak production by (perhaps) redoing the database, and changing the .wsgi file to point at the new release of your application.
When you have something releasable, tag it. Fix your Python setup.py and setup.cfg to deploy to the next /opt/myapp/myapp-tag directory.