Automating Django deployment on lighttpd - django

I have a few simple Django-based sites and their number keeps increasing. Every time I deploy a site I need to:
Manually create a bash script that starts the Django FastCGI server.
Add it to /etc/init.d so it runs after a server reboot.
Create a separate Lighttpd config that talks to the FastCGI server and serves the static files.
I know how to do it, but I'd like to automate this task if possible.
My dream setup process could look like this:
I have a folder somewhere in my /var/ directory. For example: /var/django/
I clone one of my projects to the subdirectory of this directory.
After that, one of the following happens: some software automatically detects the folder creation, creates all the necessary configs and restarts Lighttpd, OR I manually run some kind of script in the new folder to do it.
I tried to look for existing tools for this kind of automation (or something similar) on the internet, but couldn't find any.
So I'd like to ask: are there tools like this out there? Maybe not specifically for installing Django apps, but for this kind of process automation in general. Or does everybody just write their own bash scripts for such things?

Have you had a look at Fabric and Puppet?

I think Fabric will do the job. I've just started reading through the docs, and it seems very simple to get started with. Also, it has a nice Pythonic way of doing things both locally and on remote servers.
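For what it's worth, here is a minimal sketch of what such a Fabric task could look like for the setup described in the question (Fabric 1.x-style API, Debian-style lighttpd layout; the template files, port, paths and service names are all assumptions):

from fabric.api import env, sudo, task
from fabric.contrib.files import upload_template

env.hosts = ["user@yourbox"]   # assumed server
DJANGO_ROOT = "/var/django"    # the folder from the question

@task
def add_site(project, port="3033"):
    """Render an init script and a lighttpd config for a freshly cloned project, then enable both."""
    context = {"project": project, "port": port, "root": DJANGO_ROOT}
    # hypothetical local templates with %(project)s-style placeholders
    upload_template("templates/init.d.sh", "/etc/init.d/%s" % project,
                    context=context, use_sudo=True)
    upload_template("templates/lighttpd-site.conf",
                    "/etc/lighttpd/conf-enabled/10-%s.conf" % project,
                    context=context, use_sudo=True)
    sudo("chmod 755 /etc/init.d/%s" % project)
    sudo("update-rc.d %s defaults" % project)    # start the FastCGI server on boot
    sudo("/etc/init.d/%s start" % project)
    sudo("/etc/init.d/lighttpd force-reload")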

Related

Best tool for simple automated deployment with Django

For a Django project, I'm looking for a tool that would:
update the code on my server from a given branch in a remote repository (example: a master branch from Bitbucket)
run basic Django commands (migrate, collectstatic, etc.)
restart the project
notify me that all went ok (on Slack for instance)
I've seen many possible ways of doing this (Ansible, DeployBot, Pipelines, etc.), but I was wondering: is there a tool you'd recommend for a simple app?
Generally, on my Django projects I use Fabric (http://www.fabfile.org) for all the activities you mention: deploying from a particular branch, running Django commands (e.g. collectstatic), restarting the server, etc.
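A minimal sketch of such a fabfile, assuming a Fabric 1.x setup, a Slack incoming webhook and the requests library (the host, project path, service name and webhook URL are placeholders):

from fabric.api import cd, env, run, sudo, task
import requests   # assumed available, used only for the Slack notification

env.hosts = ["deploy@yourserver"]                       # placeholder host
PROJECT_DIR = "/srv/myproject"                          # placeholder path
SLACK_WEBHOOK = "https://hooks.slack.com/services/XXX"  # placeholder incoming webhook

@task
def deploy(branch="master"):
    """Pull the given branch, run the usual Django commands, restart, then notify Slack."""
    with cd(PROJECT_DIR):
        run("git fetch origin && git checkout %s && git pull origin %s" % (branch, branch))
        run("python manage.py migrate --noinput")
        run("python manage.py collectstatic --noinput")
    sudo("systemctl restart myproject")   # or touch the wsgi file / reload uwsgi, whatever applies
    requests.post(SLACK_WEBHOOK, json={"text": "Deployed %s to production" % branch})

You would then run it as fab deploy:branch=master from your machine.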

How to work with a local development server and deploy to a production server in django?

I want to work locally on my Django (1.7) project and regularly deploy updates to a production server. How would you do this? I have not found anything about this in the docs. I am confused, because it seems like something many people would want to do, so there should be some kind of standard solution. Or am I getting the whole workflow wrong?
I should note that I'm not expecting a step-by-step guide. I am just trying to understand the concept.
Assuming you already have your deployment server setup, and all you need to do is push code to your server, then you can just use git as a form of deployment.
Digital Ocean has a good tutorial at this link https://www.digitalocean.com/community/tutorials/how-to-set-up-automatic-deployment-with-git-with-a-vps
Push sources to a git repository from the dev machine.
Pull sources on the production server, then restart uwsgi/whatever.
There is no standard way of doing this, so no, it cannot be included with Django or be thoroughly described in the docs.
If you're using a PaaS, how you deploy depends on the PaaS. Ditto for a container like Docker: you must follow the rules of that particular container.
If you're old-school and can ssh into a server you can rsync a snapshot of the code to the correct place after everything else is taken care of: database, ports, webserver setup etc. That's what I do, and I control stuff with bash scripts utilizing a makefile.
# sync a snapshot of the code to the server, then run the deploy target there
REMOTEHOST=user@yourbox
REMOTEPATH=yourpath
REMOTE=$REMOTEHOST:$REMOTEPATH
make rsync REMOTE_URI=$REMOTE
ssh $REMOTEHOST make -C $REMOTEPATH deploy
My "deploy"-action is a monster but might be as easy as something that touches the wsgi-file used in order to reload the site. My medium complex ones cleans out stale files, run collectstatic and then reloads the site. The really complex ones creates a timestamped virtualenv, cloned database and remote code tree, a new server-setup that points to this, runs connection tests on the remote and if they succeed, switches the main site to point to the new versioned site, then emails me the version that is now in production, with the git hash and timestamp.
Lots of good solutions. Heroku has a good tutorial: https://devcenter.heroku.com/articles/getting-started-with-django
Check out a general guide for deploying to multiple PaaS providers here: http://www.paascheatsheet.com

Django Compressor on a multi-server deployment

I've been fortunate enough to discover django_compressor and have implemented it within our stack, which deploys to many servers (currently 6, but growing as we deploy smaller virtual machines).
Now this is all fine and dandy if you're using django_compressor at its finest: compressing raw CSS/JS code.
However, say now I want to introduce some type of precompiler. Let's say for this example it is LESS (CSS). The thought process is fairly simple:
Install node, npm, and the less package onto the server.
Add less to your precompilers!
COMPRESS_PRECOMPILERS = ( ('text/less', 'lessc {infile} {outfile}'), )
Now you deploy, and your server compiles the less file. Everything is fantastic!
Now let's add 8 more servers to that and you have to install node, npm, and less on each server?
This is where something doesn't seem right, and I feel like I'm missing something. I believe the Django community has run into this problem before.
My thoughts thus far have been:
Use a post-commit hook to compile the CSS on the developer's machine. This means that, via django_compressor, we link to the compiled static file in the HTML, and our repository contains both the compiled and non-compiled versions. My only problem with this is that it forgoes half the benefits of django_compressor and may be tedious for developers.
Suck it up and make node, npm, and less part of the server stack.
Update
I did some additional looking around and it seems that using the COMPRESS_OFFLINE flag (or just --force) with the management command will produce an offline manifest file that does what I need (only tested locally). So setting this up with a pre-deploy hook looks to be the answer.
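For reference, a sketch of the relevant settings, building on the COMPRESS_PRECOMPILERS line above; the pre-deploy hook would then run python manage.py compress --force on the one machine that has node/less installed and ship the generated files along with the rest of the static files:

COMPRESS_ENABLED = True
COMPRESS_OFFLINE = True   # serve pre-built files from the offline manifest instead of compressing per request
COMPRESS_PRECOMPILERS = (
    ("text/less", "lessc {infile} {outfile}"),
)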
Of course, still open to other ideas :-)
Coupled with the tips in the comments about COMPRESS_OFFLINE, you could look at django-staticfiles' storage stuff. You can host the static files on amazon s3, for instance, so hosting it all on one static-hosting server and using that from all your servers could also be a nice solution. You wouldn't need to do anything with the static (and compressed) files on the individual servers.
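A rough sketch of what that could look like in settings, assuming django-storages is installed (the bucket name and URLs are placeholders):

STATICFILES_STORAGE = "storages.backends.s3boto.S3BotoStorage"  # store collected static files on S3
COMPRESS_STORAGE = STATICFILES_STORAGE                           # compressor output goes to the same bucket
AWS_STORAGE_BUCKET_NAME = "my-static-bucket"                     # placeholder bucket
STATIC_URL = "https://my-static-bucket.s3.amazonaws.com/"
COMPRESS_URL = STATIC_URL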
Alternative solution regarding the multiple servers: I've made a custom Fabric (docs.fabfile.org) script that installs/configures stuff on our servers. I've only recently started using coffeescript and less, but those two are definitely ending up in my fabfile. That solves the installation problem for me.
(Alternatives to a fabfile are things like a custom debian package with standard dependencies. Or chef or puppet or something similar.)
You can use Puppet for the task.

How Are Experienced Web Developers Deploying Django Into Production on EC2?

I have never actually worked for a company which is deploying a Django App (with a large user base), and am curious about what is the best way to do this.
Right now I am hosting a Django app on EC2. The code for the app is sitting in my GitHub account. I have nginx serving static content and, behind it, a single Apache server running Django + mod_wsgi.
I am trying to figure out what the best practice is for "continuous deployment". Right now, after I have added additional functionality I do the following on EC2:
1) git reset HEAD --hard
2) git pull
3) restart apache
4) restart nginx
I have custom logic in my settings.py file so that if I am running on EC2, debug gets set to False, and my databases switch from sqlite3 (development) to mysql (production).
This seems to be working for me now, but I am wondering what is wrong with this process and how could I improve it.
Thanks
I've worked with systems that use Fabric to deploy to multiple servers.
I'm the former lead developer at The Texas Tribune, which is 100% Django. We deployed to EC2 using RightScale. I didn't personally write the deployment scripts, but it allowed us to get new instances into the rotation very, very quickly and to scale on demand. It's not cheap, but it was worth every penny in my opinion.
I'd agree with John and say that Fabric is the tool to do this sort of thing comfortably. You probably don't want to configure git to automatically deploy with a post commit hook, but you might want to configure a fabric command to run your test suite locally, and then push to production if it passes.
Many people run separate dev and production settings files, rather than having custom logic in there to detect if it's in a production environment. You can inherit from a unified file, and then override the bits that are different between dev and production. Then you start the server using the production file, rather than relying on a single unified settings.py.
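A minimal sketch of that layout, assuming a settings package (the module names, database values and domain are placeholders), selected via DJANGO_SETTINGS_MODULE or --settings:

# settings/base.py -- everything the environments share
DEBUG = False

# settings/dev.py
from .base import *
DEBUG = True
DATABASES = {"default": {"ENGINE": "django.db.backends.sqlite3", "NAME": "dev.sqlite3"}}

# settings/production.py
from .base import *
ALLOWED_HOSTS = ["example.com"]                    # placeholder domain
DATABASES = {"default": {"ENGINE": "django.db.backends.mysql",
                         "NAME": "myapp", "USER": "myapp",
                         "PASSWORD": "change-me"}}  # placeholder credentials

The mod_wsgi config would then point at myproject.settings.production, while you develop against myproject.settings.dev.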
If you're just using apache to host the application, you might benefit from a lighter weight solution. Using fastcgi with nginx would allow you to do away with the overhead of apache entirely. There's also a wsgi module for nginx, but I don't know if it's production ready at this point.
There is one more good way to manage this. For Ubuntu/Debian AMIs it works well to manage versions and do deployments by packaging your application into a .deb.

Is there an ideal way to move from Staging to Production for Coldfusion code?

I am trying to work out a good way to run a staging server and a production server for hosting multiple Coldfusion sites. Each site is essentially a fork of a repo, with site specific changes made to each. I am looking for a good way to have this staging server move code (upon QA approval) to the production server.
One fanciful idea involved compiling the sites each into EAR files to be run on the production server, but I cannot seem to wrap my head around Coldfusion archives, plus I cannot see any good way of automating this, especially the deployment part.
What I have done successfully before is use Subversion as a go-between for a site: once a site is QA'd, the code is committed, an SVN update is run in the production server's working directory, and that triggers a code copy from the working directory to the actual live code. This worked fine, but has many moving parts, and still required some form of access to each server to run the commits and updates. Plus, this worked for an individual site; I think it may be a nightmare to set up and maintain this architecture for multiple sites.
Ideally I would want a group of developers to have FTP access with the ability to log into some control panel to mark a site for QA, and then have a QA person check the site and mark it as stable/production worthy, and then have someone see that a site is pending and click a button to deploy the updated site. (Any of those roles could be filled by the same person mind you)
Sorry if that last part wasn't so much the question, just a framework to understand my current thought process.
Agree with @Nathan Strutz that Ant is a good tool for this purpose. Some more thoughts.
You want a repeatable build process that minimizes opportunities for deltas. With that in mind:
SVN export a build.
Tag the build in SVN.
Turn that export into a .zip, something with an installer, etc. The idea is one unit to validate, with a set of repeatable deployment steps.
Send the build to QA.
If QA approves, deploy that build into production.
Move whole code bases over as a build, rather than just changed files. This way you know what's put into place in production is the same thing that was validated. Refactor code so that configuration data is not overwritten by a new build.
As for actual production deployment, I have not come across a tool to solve the multiple servers, different code bases challenge. So I think you're best served rolling your own.
As an aside, in your situation I would think through an approach that allows for a standardized codebase, with a mechanism (i.e. an API) that allows for the customization you're describing. Otherwise managing each site as a "custom" project is very painful.
Update
Learning Ant: Ant in Action [book].
On Source Control: for the situation you describe, I would maintain a core code base and overlays per site. Export core, then export the site-specific code over it. This ensures that any core updates not overridden by site-specific changes make it in.
Call this combination a "build". Do builds with Ant. Maintain an Ant script - or perhaps more flexibly an ant configuration file - per core & site combination. Track version number of core and site as part of a given build.
If your software is stuffed inside an installer (Nullsoft Install Shield for instance) that should be part of the build. Otherwise you should generate a .zip file (.ear is a possibility as well, but haven't seen anyone actually do this with CF). Point being one file that encompasses the whole build.
This build file is what QA should validate. So validation includes deployment, configuration and functionality testing. See my answer for deployment on how this can flow.
Deployment:
If you want to automate deployment, QA should be involved as well to validate it. Meaning QA would deploy / install builds using the same process on their servers before doing a staging-to-production deployment.
To do this I would create something that tracks which server receives which build file, along with whatever credentials and connection information are necessary to make that happen. Most likely via FTP. Once transferred, the tool would then extract the build file / run the installer. This last piece is an area I would have to research: how to let one server run commands such as extraction or installation remotely.
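As one possibility (a sketch, not a recommendation): the transfer-and-extract step could be scripted with Python's paramiko library, which handles both the file transfer (SFTP rather than plain FTP) and running the remote extraction command over SSH. The hosts, paths and credentials below are placeholders:

import paramiko

def push_build(host, user, key_file, build_zip, deploy_dir):
    """Copy a build archive to a target server and unzip it there."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=user, key_filename=key_file)

    # transfer the build file
    sftp = client.open_sftp()
    remote_zip = "/tmp/build.zip"
    sftp.put(build_zip, remote_zip)
    sftp.close()

    # extract it into the deployment directory on the target server
    stdin, stdout, stderr = client.exec_command("unzip -o %s -d %s" % (remote_zip, deploy_dir))
    status = stdout.channel.recv_exit_status()
    client.close()
    return status == 0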
You should look into Ant as a migration tool. It allows you to package your build process with a simple XML file that you can run from the command line or from within Eclipse. Creating an automated build process is great because it documents the process as well as executes it the same way, every time.
Ant can handle zipping and unzipping, copying around, making backups if needed, working with your subversion repository, transferring via FTP, compressing javascript and even calling a web address if you need to do something like flush the application memory or server cache once it's installed. You may be surprised with the things you can do with Ant.
To get started, I would recommend the Ant manual as your main resource, but look into existing Ant builds as a good starting point to get you going. I have one on RIAForge for example that does some interesting stuff and calls a groovy script to do some more processing on my files during the build. If you search riaforge for build.xml files, you will come up with a great variety of them, many of which are directly for ColdFusion projects.