Is there an ideal way to move from Staging to Production for ColdFusion code? - coldfusion

I am trying to work out a good way to run a staging server and a production server for hosting multiple ColdFusion sites. Each site is essentially a fork of a repo, with site-specific changes made to each. I am looking for a good way to have this staging server move code (upon QA approval) to the production server.
One fanciful idea involved compiling the sites into EAR files to be run on the production server, but I cannot seem to wrap my head around ColdFusion archives, plus I cannot see any good way of automating this, especially the deployment part.
What I have done successfully before is use Subversion as a go-between for a site: once a site is QA'd, the code is committed, and then the production server's working directory has an SVN update run against it, which triggers a code copy from the working directory to the actual live code. This worked fine, but it has many moving parts and still required some form of server access to each machine to run the commits and updates. Plus, this worked for an individual site; I think it could be a nightmare to set up and maintain this architecture for multiple sites.
Ideally I would want a group of developers to have FTP access and the ability to log into some control panel to mark a site for QA, then have a QA person check the site and mark it as stable/production-worthy, and then have someone see that a site is pending and click a button to deploy the updated site. (Any of those roles could be filled by the same person, mind you.)
Sorry if that last part wasn't so much the question as a framework for understanding my current thought process.

Agree with @Nathan Strutz that Ant is a good tool for this purpose. Some more thoughts.
You want a repeatable build process that minimizes opportunities for deltas. With that in mind:
SVN export a build.
Tag the build in SVN.
Turn that export into a .zip, something with an installer, etc. The idea is one unit to validate, with a set of repeatable deployment steps.
Send the build to QA.
If QA approves, deploy that build into production.
Move whole code bases over as a build, rather than just changed files. This way you know what's put into place in production is the same thing that was validated. Refactor code so that configuration data is not overwritten by a new build.
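A minimal sketch of those first steps from the command line (the repository URL, version number, and paths are placeholders):

    # export a clean copy (no .svn metadata) of the code to build
    svn export https://svn.example.com/repo/trunk build/mysite-1.4.2

    # tag the exact code the build came from
    svn copy https://svn.example.com/repo/trunk \
             https://svn.example.com/repo/tags/build-1.4.2 -m "Tag build 1.4.2"

    # one artifact to hand to QA and deploy everywhere
    zip -r mysite-1.4.2.zip build/mysite-1.4.2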
As for actual production deployment, I have not come across a tool to solve the multiple servers, different code bases challenge. So I think you're best served rolling your own.
As an aside, in your situation I would think through an approach that allows for a standardized codebase, with a mechanism (i.e. an API) that allows for the customization you're describing. Otherwise managing each site as a "custom" project is very painful.
Update
Learning Ant: Ant in Action [book].
On source control: for the situation you describe, I would maintain a core code base plus overlays per site. Export core, then the site-specific code over it. This ensures that any core updates not overridden by site-specific changes make it in.
Call this combination a "build". Do builds with Ant. Maintain an Ant script - or, perhaps more flexibly, an Ant configuration file - per core & site combination. Track the version numbers of core and site as part of a given build.
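In Ant, the core-then-site overlay could look something like this (property names and paths are illustrative, not a convention):

    <property name="core.version" value="2.1.0"/>
    <property name="site.version" value="1.4.2"/>

    <target name="assemble">
        <!-- lay down the core codebase first -->
        <copy todir="${build.dir}">
            <fileset dir="${core.export.dir}"/>
        </copy>
        <!-- then the site-specific files, winning any collision -->
        <copy todir="${build.dir}" overwrite="true">
            <fileset dir="${site.export.dir}"/>
        </copy>
        <!-- one file encompassing the whole build, versions tracked in its name -->
        <zip destfile="dist/site-${site.version}-core-${core.version}.zip"
             basedir="${build.dir}"/>
    </target>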
If your software is stuffed inside an installer (NSIS, the Nullsoft Scriptable Install System, for instance), that should be part of the build. Otherwise you should generate a .zip file (.ear is a possibility as well, but I haven't seen anyone actually do this with CF). The point being one file that encompasses the whole build.
This build file is what QA should validate, so validation includes deployment, configuration and functionality testing. See the Deployment section below on how this can flow.
Deployment:
If you want to automate deployment, QA should be involved as well to validate it, meaning QA would deploy/install builds using the same process on their servers before doing a staging-to-production deployment.
To do this I would create something that tracks which server receives which build file, plus whatever credentials and connection information are necessary to make that happen - most likely via FTP. Once the file is transferred, the tool would extract the build file / run the installer. This last piece is an area I would have to research: how to let one server run commands such as extraction or installation remotely.
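One possibility: Ant's optional <ftp> and <sshexec> tasks cover the transfer and the remote command respectively. A sketch, assuming the jars they need (commons-net for ftp, JSch for sshexec) are on Ant's classpath, with placeholder hosts and credentials:

    <!-- push the build file to a production server -->
    <ftp server="prod1.example.com" userid="${deploy.user}" password="${deploy.pass}"
         remotedir="/var/builds">
        <fileset file="dist/site-1.4.2.zip"/>
    </ftp>

    <!-- then run the extraction remotely -->
    <sshexec host="prod1.example.com" username="${deploy.user}" password="${deploy.pass}"
             trust="true"
             command="unzip -o /var/builds/site-1.4.2.zip -d /var/www/site"/>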

You should look into Ant as a migration tool. It allows you to capture your build process in a simple XML file that you can run from the command line or from within Eclipse. Creating an automated build process is great because it documents the process as well as executing it the same way, every time.
Ant can handle zipping and unzipping, copying files around, making backups if needed, working with your Subversion repository, transferring via FTP, compressing JavaScript, and even calling a web address if you need to do something like flush the application memory or server cache once the code is installed. You may be surprised at the things you can do with Ant.
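The web-address piece, for instance, is just Ant's built-in <get> task (the URL here is a placeholder):

    <!-- hit a reset URL after deployment to re-initialize the application -->
    <get src="http://www.example.com/index.cfm?reset=true"
         dest="build/reset-response.html"/>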
To get started, I would recommend the Ant manual as your main resource, but look into existing Ant builds as a good starting point. I have one on RIAForge, for example, that does some interesting stuff and calls a Groovy script to do some more processing on my files during the build. If you search RIAForge for build.xml files, you will come up with a great variety of them, many of which are directly for ColdFusion projects.

Related

Selecting a git workflow for my situation

I'm new to git. I've read the well-written intro book. But gee, it's still not a trivial topic. I've been bumbling around, experiencing various problems. I realized it might be because I'm unaware of workflow, and specifically, "what are the best practices for doing what I'm trying to do?"
I started out developing a Django project on my Win7 machine with PyCharm. Great way to get the initial 95% written.
But then I need to deploy it to my production machine at PythonAnywhere.
So I created a private GitHub repository and pushed my Win7 codebase to GitHub.
Then, on PythonAnywhere, I cloned the GitHub repository.
For now, no others work on this project. It will not be released to the public.
Now that the server is running on PythonAnywhere, I still need to tweak settings, which is best done on the PythonAnywhere side. But there are other improvements (new pages, or views) that I'd rather make inside the PyCharm IDE on my Win7 machine than in vim on PythonAnywhere.
So I've been clumsily pushing and fetching these changes. It's been kind of ham-handed, and I've managed to lose some minor changes through ignorance.
So I'm wondering if anyone can point to a relatively simple workflow that would handle the various tasks I mentioned:
1) improving functionality of the site (best done in Pycharm IDE)
2) production server issues and tweaks (best done on PythonAnywhere)
3) keeping everything safely backed up on GitHub
The other issue is that I have another django app that I want to build. It's easiest to temporarily hang it off the django project I've already built. But I'd prefer to keep it in its own repository.
So I have Original_Project, Original_App stored in Original_Repository
I want to make new_app, and have it, for the time being, run in Original_Project, but I want to version control it in New_Repository.
I think/hope that I could put a .gitignore in Original_Repository telling it to ignore new_app/. Then I'd git init new_app/ as its own repository. Is that sound or mad?
You should avoid editing your code on the production server as much as possible, and never commit from the production server. If you end up having to tweak things on the server (you shouldn't, but well, shit happens, and sometimes it's indeed easier to first get the code back to work on the server), then once it's working, manually port your edits back to your local repo, clear the changes on the server, and deploy the fixed code again. Here the GitHub repo should be considered the "master" repository for deployments, i.e. you work on your local repo, push to GitHub, and on the server pull from GitHub. This makes sure you keep the GitHub repo in sync.
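In practice the loop is just this (assuming the default origin remote and master branch):

    # on your win7 machine: develop, then
    git add -A
    git commit -m "describe the change"
    git push origin master

    # on PythonAnywhere: deploy only, never commit here
    git pull origin master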
With regard to "improving functionality" (aka "features") vs "server issues and tweaks" (aka "hotfixes"): git flow is a (mostly) sane workflow IMHO, but that's a bit opinion-based (some dislike it, and they have sensible arguments too).
Finally, if you want to factor out one of your apps, the best approach is to have it in its own (GitHub) repo with all the proper Python packaging stuff, and make it a requirement of your main project. In your local dev environment you install it as an editable package, and for the production setup you install it as a normal package pinned to the last stable version. Note that in both cases I assume you're using virtualenvs (and if you don't, well, that's the very first issue you should address).
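A sketch, with placeholder names:

    # local dev environment: install the factored-out app as an editable package
    pip install -e ../my-reusable-app

    # production requirements.txt: pin it to the last stable version
    my-reusable-app==1.2.0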
Update:
What are the downsides of editing directly on the production server and committing from the production server?
Well, quite simply, a production server is not the place for coding - "production" means that you have users trying to do something with your website, and they don't want the site breaking on them, their data lost, or whatever, because you are "tweaking" things. You should only deploy stable, well-tested code to production, and the one and only case where editing anything on the server might be a last-resort option is when it's already broken and you want to get it back online asap, whatever it takes (a case of "first make it work, then make it clean").
Point is, I'm a professional developer working on projects that are business-critical, and a broken site is not an option, so I'm very strict on this - but even if it's a hobby project, your users deserve some respect (at least if you expect to see them back).
A proper production chain actually involves at least three environments: your local dev environment; a staging server (which should closely mirror the production server - system, system package versions, configuration, etc.) to test, showcase, and eventually make minor config tweaks on; and the production server, which should only ever see stable, tested code.
I have always struggled with git - knowing it well enough to get things working, but never being sure I am doing things well.
I would suggest installing git flow (it is probably available in your package manager if you are on Linux). It's a set of extensions that simplify a standard git workflow. Since I started using it, this has pretty much been all the documentation I have needed:
https://danielkummer.github.io/git-flow-cheatsheet/
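The day-to-day commands are short; for example (using the default branch names git flow init suggests):

    git flow init                       # one-time setup; creates develop alongside master
    git flow feature start new-page     # branch off develop for a feature
    git flow feature finish new-page    # merge it back into develop
    git flow hotfix start login-fix     # branch off master for a production fix
    git flow hotfix finish login-fix    # merge into master *and* develop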

Pain of configuring various environments in development and production (Rails 4 application)

As per best practices, my development team does not store the application config file in a repo for security reasons (we use a config/application.yml file to store configs). However, when we actually develop and deploy, this causes some problems:
A developer needs to add a new external URL that differs depending on the environment the application is running in. Since there is no config file in the repo, he cannot update a single file that gets synced when another developer pulls the code. To make this happen, he updates his local config/application.yml file, then every other developer updates their local file, and then we have to add the new ENV variable to the server's config/application.yml. There has to be a better solution.
If we stored the config/application.yml file in the repo and shared it among everyone and the servers, this would solve the problem of sharing/updating global configs, BUT it opens up the possibility that a developer may accidentally start their local application in production mode and touch live data or spam real users with test emails (it has happened, which is why it's a concern).
Is there a standard best practice for solving these types of problems? It seems I have to sacrifice either productivity or security, and can't really have both.
I've been thinking about creating a config/development.yml file in the repo that all developers share, which stores all environments EXCEPT production. That way they can share config/ENV items for development and sync them up. But in production, I would have a config/production.yml file that ONLY lives on the servers.
If the application is started in anything except production environment, it loads the development.yml file. If it is started in production, it loads the production.yml file. But since the production.yml file does NOT live in the repo (only on the servers), there's no chance that a developer can accidentally touch live data or spam real users, etc...
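In code, the scheme I'm imagining is only a few lines in an initializer. A sketch - the file and constant names are just my own placeholders, and I assume each YAML file is keyed by environment:

    # config/initializers/_load_settings.rb (hypothetical)
    require 'yaml'

    file = Rails.env.production? ? 'production.yml' : 'development.yml'
    APP_CONFIG = YAML.load_file(Rails.root.join('config', file))[Rails.env]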
Have any professional developers tried a scheme like this? I've done a lot of googling but really haven't found a satisfactory solution.
Check out the RailsConfig gem. This allows you to do exactly what you stated, but with the ease of a gem. It also allows you and your dev team to have local YAML files that override settings. It reads the following files in order, later ones overriding earlier ones:
config/settings.yml
config/settings/#{environment}.yml
config/environments/#{environment}.yml
config/settings.local.yml
config/settings/#{environment}.local.yml
config/environments/#{environment}.local.yml
You would then just add config/settings/production.yml to your .gitignore so that it is never checked into source control.
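With that file ignored, usage could look like this (the key names are illustrative):

    # .gitignore entry (kept out of the repo): config/settings/production.yml

    # given config/settings/development.yml contains:
    #   mailer:
    #     host: localhost
    Settings.mailer.host  # => "localhost"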

Coldfusion continuous Integration

let me begin by saying I'm a ColdFusion newbie.
I'm trying to research whether it's possible to do the following, and what the best approach would be to achieve it.
Whenever a developer checks code into SVN, I would like to get all the new changes/files and do an automatic build to check whether the code can be deployed successfully to the production server. I guess there are two parts to it: first, syntax checking, and second, integration testing (whether functionality works as expected). For the latter part some unit test tools would have to be used.
Can someone comment on their experience doing something similar with ColdFusion?
Sorry for being a bit vague... I know it's a very open-ended question, but any feedback would be appreciated.
Thanks
There's a project called "Cloudy With A Chance of Tests" that purports to do what you require. In particular it brings together a number of other CFML code analysis projects (VarScope & QueryParam) to check code, as well as unit testing. I am not currently using it myself but did have a look at it some time ago (more than 12 months) and it appeared to be quite good.
https://github.com/mhenke/Cloudy-With-A-Chance-Of-Tests
Personally I run MXUnit tests in Jenkins using the instructions from the MXUnit site - available here:
http://wiki.mxunit.org/display/default/Continuous+Integration+--+Running+tests+with+Jenkins
Essentially this is set up as an ant task in Jenkins, which executes the MXUnit tests and reports back the results.
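The target boils down to something like the following - a sketch only, since the task and attribute names here are recalled from the MXUnit wiki page above and should be verified against it:

    <!-- register the MXUnit Ant task (jar location is a placeholder) -->
    <taskdef name="mxunittask" classname="org.mxunit.ant.MXUnitAntTask"
             classpath="lib/mxunit-ant.jar"/>

    <target name="run-tests">
        <mxunittask server="localhost" port="8500"
                    outputdir="build/test-results" failureproperty="tests.failed">
            <directory path="/var/www/myapp/test" recurse="true"/>
        </mxunittask>
        <!-- fail the Jenkins build if any test failed -->
        <fail if="tests.failed" message="MXUnit tests failed"/>
    </target>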
We're not doing fully continuous integration, but we have a process which automates some of the drudgery of our builds:
replace the site's application.cf(m|c) with one that tells users that the app is being deployed (we had QA staff raising defects that were due to re-deployments)
read a database manifest XML which lists all SQL scripts which make up the current release. We concatenate the scripts into a single upgrade script, suitable for shipping
execute the SQL script against the server's DB, noting any errors. The concatenation process also adds a line of SQL after each imported script that writes to a runlog table (see the sketch after this list), so we can see what ran, how long it took and which build it was associated with. If you're looking to replicate this step, take a look at Liquibase
deploy the latest code
make an http call to a ?reset=true type URL to tell the app to re-initialize
execute any tests
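The per-script logging line mentioned in the list above could be as simple as this (the table and column names are illustrative; GETDATE() is SQL Server syntax):

    -- appended by the concatenation step after each imported script
    INSERT INTO runlog (build_number, script_name, finished_at)
    VALUES ('1.4.2', '0042_add_orders_table.sql', GETDATE());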
The build is requested manually through the build servers we have, but you click a button, make tea and it's done.
We've just extended the above to cope with multiple servers in a cluster and it ticks along nicely. I think the above suggestion of using the Jenkins SVN plugin to automate the process sounds like the way to go.

Deployment of files other than source code

I am starting to prepare a roadmap for our release process. We are at present using TortoiseSVN and Ant for building source. I am considering implementing continuous integration and would like to know the right direction for the choices below.
Firstly, the present process is such that a developer works on a file and commits it directly to the repo. Others run the TortoiseSVN update command to pull in the required changes. The same process is followed on the build server, where we update the source code, build, and then deploy to the QA and production servers. However, this process lacks control of the repo, since during an update unwanted code can also be pulled in when two developers have worked on the same file fixing two different issues - one approved by QA and the other rejected. How can I overcome this scenario?
Secondly, apart from source we have a bunch of other files such as XML files, CSS, JS, etc. How do I automate deployment of these files? I have configured CruiseControl on my local machine and it works fine when it comes to executing a build, but I'm not sure how to handle the other files, since updating them in production seems risky and error-prone. Any suggestion on this would be really helpful.
You could try integrating PowerShell with CruiseControl; our team has CC fire off the build process and then PowerShell copy the resulting project files (code and others) to production, a test site, or wherever.
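The PowerShell step can be as small as a recursive copy (the paths here are placeholders):

    # invoked by CruiseControl after a successful build
    Copy-Item -Path "C:\builds\mysite\*" `
              -Destination "\\prodserver\wwwroot\mysite" -Recurse -Force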
To deal with the lack of repository control, I'd suggest you create a candidate branch off your trunk and designate that as your integration code. Once it's settled and the necessary changes have been committed or pulled, promote it to regression for further testing. Then, once that testing is successful, promote it to production.
In this process your developers wouldn't be committing to production directly; instead, through an iterative process, a new production repository results, whose changes can then be reintegrated into trunk so the process can start anew for the next release.
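In Subversion terms the promotions are just copies and merges; a sketch with placeholder names (^/ resolves to the repository root from within a working copy):

    # cut the candidate (integration) branch from trunk
    svn copy ^/trunk ^/branches/candidate -m "Open integration for release 1.5"

    # promote once settled, then again once regression testing passes
    svn copy ^/branches/candidate ^/branches/regression -m "Promote to regression"
    svn copy ^/branches/regression ^/tags/production-1.5 -m "Promote to production"

    # reintegrate the release back into trunk (run from a trunk working copy)
    svn merge --reintegrate ^/branches/candidate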

How do I run one version of a web app while developing the next version?

I just finished a Django app that I want to get some outside user feedback on. I'd like to launch one version and then fork a private version so I can incorporate feedback and add more features. I'm planning to do lots of small iterations of this process. I'm new to web development; how do websites typically do this? Is it simply a matter of copying my Django project folder to another directory, launching the server there, and continuing my dev work in the original directory? Or would I want to use a version control system instead? My intuition is that it's the latter, but if so, it seems like a huge topic with many uses (e.g. collaboration, which doesn't apply here) and I don't really know where to start.
1) Separate URLs: www.yoursite.com vs test.yoursite.com. You can also do www.yoursite.com and www.yoursite.com/development, etc. You could also create a /beta or /staging.
2) Keep separate databases, one for production and one for development. Write a script that will copy your live database into a dev database (see the sketch after this list). Keep one database for each type of site you create (you may want to create a beta or staging database for your tester). Do your own work in the dev database. If you change the database structure, save the changes as a .sql file that can be loaded and run on the live site's database when you turn those changes live.
3) Merge features into your different sites with version control. I am currently playing with a Subversion setup for web apps that has my stable code (trunk), one branch for staging, and one for development. Development tags and branches get merged into staging, and then staging tags/branches get merged into stable. Version control will let you manage your source code any way you want; you will have to find a methodology that works for you and use it.
4) Consider build automation. It will publish your site for you automatically. Take a look at http://ant.apache.org/. It can automate a lot of the work of checking out your code and uploading it to each specific site as you might need.
5) Toy of the month: there is a utility called cURL that you may find valuable. It does a lot from the command line. It might be handy in case you don't want to use all, or any, of Ant.
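For point 2, the copy script can be a two-liner if you are on something like MySQL (database names and credentials are placeholders):

    # dump the live database and load it into the dev database
    mysqldump -u dbuser -p live_db > live_dump.sql
    mysql     -u dbuser -p dev_db  < live_dump.sql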
Good luck!
You would typically use version control and have two domains: your-site.com and test.your-site.com. Then your-site.com would always update to trunk, which is the current, latest shipping version. You would do your development in a branch of trunk, and test.your-site.com would update to that. Then you periodically merge changes from your development branch into trunk.
Jas Panesar has the best answer if you are asking this from a development standpoint, certainly - that is, if you're just asking how to easily keep your new developments separate from the site that is already running. However, if your question was actually asking how to run both versions simultaneously, then here are my two cents.
Your setup has a lot to do with this, but I always recommend running process-based web servers in the first place. That is, don't use threaded servers (less relevant to this question) and don't embed in the web server (that is, don't use mod_python, which is the relevant part here). So, you have one or more processes getting HTTP requests from your web server (Apache, Nginx, Lighttpd, etc.). Now, when you want to try something out live without affecting your normal running site, you can bring up a process serving requests that never gets the regular requests proxied to it like the others do. That is, normal users don't see it.
You can set up a subdomain that points to this process, and you can install middleware that redirects "special" users to the beta version. This allows you to roll out new features to some users, but not others.
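A sketch of that middleware in Django (the is_beta_tester flag and the beta host are made up for illustration):

    # middleware.py -- send flagged users to the beta process
    from django.http import HttpResponseRedirect

    BETA_HOST = 'beta.example.com'  # subdomain served by the beta process

    class BetaRedirectMiddleware(object):
        def process_request(self, request):
            user = getattr(request, 'user', None)
            if (user is not None and user.is_authenticated()
                    and getattr(user, 'is_beta_tester', False)
                    and request.get_host() != BETA_HOST):
                return HttpResponseRedirect('http://%s%s' % (BETA_HOST, request.path))
            return None  # fall through to the normal site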
Now, the biggest issues come with database changes. Schema migration is a big deal and something most of us never pay attention to. I think that running side-by-side is great, because it forces you to do schema migrations correctly. That is, you can't just shut everything down and run lengthy schema changes before bringing it back up. You'd never see any remotely important site doing that.
The key is those small steps. You need to always have two versions of your code able to access the same database, so changes you make for the new code must not break the old code. This breaks down into a few steps you can always take:
You can add a column with a default value, or that is optional. The new code can use it, and the old code can ignore it.
You can update the live version with code that knows to use a new column, at which point you can make it required.
You can make the new version ignore a column, and when it becomes the main version, you can delete that column.
These small steps let you migrate between any two schemas. You can iteratively add a new column that replaces an old one, roll out the new code, and remove the old column, all without interrupting service.
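For example, replacing a column might run like this across two deploys (table and column names are illustrative; MODIFY is MySQL syntax):

    -- deploy 1: add the new column; old code ignores it, new code fills it
    ALTER TABLE users ADD COLUMN display_name VARCHAR(100) NULL;

    -- once only new code is live: backfill, then make it required
    UPDATE users SET display_name = username WHERE display_name IS NULL;
    ALTER TABLE users MODIFY display_name VARCHAR(100) NOT NULL;

    -- deploy 2: when nothing reads the old column any more, drop it
    ALTER TABLE users DROP COLUMN username;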
That said, it's your first web app? You can probably break it. You probably have few users :-) But it is fantastic you're even asking this question. Many "professionals" fail to ever ask it, and fewer still answer it.
What I do is export a copy of my SVN repository and put the files on the live production server, then keep a virtual machine with a development working copy and commit the changes to the repo when I'm done.