Continuous deployment of C/C++ executables to Linux production servers

I wonder if there is any best practice, or at least a more practical way, to deploy C/C++ executables to Linux-based production servers.
I have Jenkins up and running as a CI server, and I have created a main SVN module which contains multiple svn:externals. This module mainly serves as a pipeline of related C++ applications. (Perhaps I should post this as another question on whether svn:externals is the correct way to do it.)
So the main question is the deployment steps. I am planning to make all production servers Jenkins slaves with parameterized configs, for the purpose of building from SVN tags, and then use some scripts to copy all executables to, e.g., /opt/mytools/bin on multiple production servers.
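For concreteness, the copy step I have in mind would be roughly this as a post-build step on each production-server slave (a sketch only; the build output path is hypothetical, and BUILD_TAG and WORKSPACE are standard Jenkins environment variables):
# hypothetical post-build step on a production-server slave
install -d /opt/mytools/bin
install -m 755 "$WORKSPACE"/build/bin/* /opt/mytools/bin/
logger "deployed $BUILD_TAG to /opt/mytools/bin"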
Any recommendations?

The best deployment route is the one specified by your distribution, IMHO. That is, for Debian packages, bundle your applications into .deb files, put them into a repository, and let apt-get take care of the rest. This way you have minimal impact on the production environment, and most admins are already familiar with the deployment scheme.
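A minimal sketch of that route, assuming a conventional debian/ directory in your source tree (the package name, repository path, and distribution codename below are made up):
# on the build machine: build an unsigned package from the source tree
debuild -us -uc
# publish it to an internal apt repository (reprepro is one option)
reprepro -b /srv/apt includedeb stable ../mytools_1.0-1_amd64.deb
# on each production server
apt-get update && apt-get install mytools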

I'm working through some of the same questions, and I'm finding that Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation by Humble and Farley has been a good (technology agnostic) starting point - not perfect but it's pointed me in the right direction when I had no idea what to do next.
The continuous delivery book recommends setting up 'build pipelines' in which you run progressively more automated tests, with only the final manual tests and the deploy/rollback steps being triggered by a real person.

Related

Rails app, Continuous Integration/Deployment Environments

When developing, my team obviously uses development as our environment.
When we run automated tests, we use testing.
We also have staging and production environments, respectively used for our testers to check out features and the final "live" product.
We're trying to set up an internal CI server to run our automated tests against and to eventually assist with automated deployments.
Since the CI server really just runs automated tests, some think it should run in the testing environment. However, for the CI server to actually be useful, my thought is that it needs to run in production mode with as close a mirror of the actual production environment as possible (without touching the production DB, obviously).
Is there an accepted environment that a CI server should be executed under? The production environment (with a different DB) seems the only logical answer to me, but I may be missing something...
Running any tests on the PROD environment, as you said, "seems the only logical answer", but that is not quite true. There is a risk that your tests seriously damage the actual environment/application, to the point where you are facing a recovery operation. After all, the dark side of testing is to show that your software has more than minor bugs and is not working as expected.
I can think of at least these 'why not test production' considerations:
When the product is launched, the customer relies on it, expecting that your software works (having already been tested). Your live environment should do its job and not be loaded down with tests. If the product misbehaves (or does not perform), the technical team has to be sent in to cover the damage, fix the gaps, and make it run hassle-free. This not only affects the product cost but can delay the project deadlines in a major way, which in turn has a knock-on effect on the vendor's profits and the next few projects.
The production or development team, when it completes product development at its end, has to provide this test environment to the testing team before loading the newly developed product onto that environment for testing.
To me, no matter that you "also have staging and production environments", it is essential to use the Test one accordingly. Furthermore, the testing environment should be configured as close as possible to production. Also, one person could be trying to test while another person breaks the thing that he has been testing; without the two being separate there is no way to do proper testing.
Just to give a full answer, your STAGE environment can have different roles depending on the company.
One is that it can be the QA/STAGE environment that has an exact copy of production, used for both QA and system testing (testing of the system when a lot of updates/changes or an upgrade is about to go into production).
UPDATE:
That was my point too. The QA environment should be a mirror of PROD. A possible solution to your issue with caching/pre-loading files onto staging/production is the creation of pre-/post-step .bat (let's assume) files.
In our current test project we use this approach. In the pre-steps we set up the files needed for test execution (like removing files from previous runs and downloading the latest copies/artifacts). In the post-steps we set up the reporting files needed. The advantage is that your files are collected and synced before every execution.
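Purely as an illustration (the original uses .bat files; this is the same pre-step idea sketched as a shell script, with a made-up artifact URL):
# pre-step: clean leftovers from the previous run, then fetch the latest artifacts
rm -rf ./artifacts && mkdir -p ./artifacts
wget -q -O ./artifacts/app-latest.zip http://ci.example.local/artifacts/app-latest.zip
unzip -q ./artifacts/app-latest.zip -d ./artifacts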
About the "not on the same physical hardware" point: in my case we maintain a dedicated remote test server. The advantages are clear; the only thing that needs to be considered is that it will require maintenance (administration).

Continuous build system for Qt

I am a Qt/C++ developer. I would like to set up a continuous integration environment whereby committing the source code triggers a build process that builds the code for the 3 platforms I'm using:
Linux
OS X
Win32
If possible, how do I set up such an environment? Any hints or links are welcome.
I've read around about Jenkins, but I can't find any good tutorial for it.
I also suggest Jenkins for several reasons:
It will run on all of the platforms you listed.
It can be configured to start a build when the repository is updated (hint: configure the Job to "Poll SCM" and you won't have to muck with your SCM tool to get it to tell Jenkins to start building).
It provides good support (mostly through plugins) for unit testing. [Your project is doing unit testing, right?]
The price is right
A bigger issue you're going to have is that, AFAIK, Qt doesn't really do cross-compiling for other platforms well. Using Jenkins (and the appropriate plugins), you should be able to solve this.
One method that comes quickly to mind is to have an instance of Jenkins on each platform. Each instance is responsible for building the version for its own platform. At the end of the build, the created artifacts are all put into a common, shared location.
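As a sketch, the build step each instance runs could be as simple as the following (the project name, artifact path, and shared location are assumptions; on Windows you would use nmake or mingw32-make instead of make):
# Jenkins build step on a Linux or OS X slave
qmake myapp.pro CONFIG+=release
make -j4
# drop the resulting binary into the common, shared artifact location
cp ./myapp /mnt/build-artifacts/"$JOB_NAME"/"$BUILD_NUMBER"/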
Jenkins supports all of the major source control systems via plugins. If you are seriously considering using Jenkins (and I would highly recommend it), consider buying John Ferguson Smart's Jenkins: The Definitive Guide.
Two solutions coming to my mind:
BuildBot
BuildBot is a highly customizable continuous integration system written in Python. The master component offers a nice web-based GUI to monitor and trigger builds; slave components are put on the target machines (usually virtual machines, but they could be the Mac laptop of one of the developers). The docs are good enough to build up a basic system; customization can be a little tricky (at least it was for me). Using the commit/push hooks provided by VC systems, you can easily activate the master and trigger builds across the slaves. It also supports incremental builds (a must if your project is big).
CDash
Developed by the authors of CMake, CDash is a web application that collects builds coming in from across the network; not exactly what you asked for, but I think it's worth a try. It is very powerful if you have a team of developers who can continuously submit build results from their machines to the server (and if you use CMake it's almost transparent). You cannot trigger builds from the server as Buildbot does, but you could set up a bunch of VMs with a cron job that checks for changes and, if there are any, performs the build and sends the results to CDash.
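For a CMake-based project, that cron-driven submission can be sketched like this (the dashboard location comes from the project's CTestConfig.cmake; the path below is an assumption):
# run from cron on each build VM
cd /home/build/myproject/build
ctest -D Nightly   # update, configure, build, test, and submit the results to CDash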
Sure, it's possible. Most version control systems are able to execute a custom script on the server side. Some of them (git, for example) have hooks to achieve the same locally. Have a look at git's post-commit hook.
All you need is to create a script that will trigger cross-platform builds.
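A minimal sketch of such a hook, assuming one Jenkins job per platform with remote build triggering enabled (the job names, host, and token are made up):
#!/bin/sh
# .git/hooks/post-commit (or a server-side post-receive hook)
for job in myapp-linux myapp-osx myapp-win32; do
  curl -fsS "http://jenkins.example.local/job/$job/build?token=SECRET" >/dev/null
done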
Most version control systems provide post-commit hooks that let you kick off events like builds. Alternatively, build systems can be configured to regularly poll a source control repository and manage their own build scheduling (this is how we use Jenkins).
Something to bear in mind is how long it will take to do a complete build across platforms and the typical number of check-ins in that interval. You might find batching check-ins a better way of doing continuous integration builds if you have a fair-sized team or limited build server resources. Otherwise your build system could quickly end up trying to play catch-up.
As for whether it is possible to build on all target platforms, that depends on your tool chain.

Django and multi-stage servers

I am working with a client that demands a multi-stage server setup: a development server, a staging server, and a production/live server.
Staging should be as stable as possible so we can test the new features we develop on the development server and, in the end, take them to the live server.
We use git and github for version controlling. I use Ubuntu server edition as the OS.
The problem is, I have never worked with such a multi-stage server plan. What software/projects would you recommend for handling such a setup properly, especially deployment and moving a newly developed feature to staging and then to the live server?
We use two different methods of moving code from environment to environment. The first is to use branches and triggers with our source control system (Mercurial in our case, though you can do the same thing with git). The other is to use Fabric, a Python library for executing shell code across a number of servers.
Using source control, you can have several main branches, like production, development, and staging. Say you want to move a new feature into staging. I'll explain in terms of Mercurial, but you can port the commands over to git and it should be fine.
hg update staging
hg merge my-new-feature
hg commit -m 'my-new-feature > staging'
hg push
You then have your remote source control server push to all of your web servers using a trigger. A trigger on each web server will then do an update and reload the web server.
To move from staging to production, it's just as easy.
hg update production
hg merge staging
hg commit -m 'staging > production'
hg push
It's not the nicest method of deployment, and it makes rolling back quite hard. But it's quick and easy to set up, and still a lot better than manually deploying each change to each server.
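For reference, the per-web-server trigger mentioned above could be a small script run from an hg changegroup hook (a sketch only; the reload command depends entirely on how you serve the app, touching the WSGI file being one common mod_wsgi convention):
# deploy-site.sh, called from a changegroup hook in the web server's .hg/hgrc
hg update                          # bring the working copy up to date with what was pushed
touch /srv/myproject/django.wsgi   # assumed reload mechanism; replace with whatever your stack uses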
I won't go through fabric, as it can get quite involved. You should read their documentation so you understand what it is capable of. There are plenty of tutorials around for fabric and django. I highly recommend the fabric route as it gives you lots more control, and only involves writing some python.
There is a nice branching model for git (it is also used by GitHub itself, for example). You can easily apply this branching model using git-flow, a git extension that lets you run some high-level repository operations that fit this model. There's also a nice blog post about this.
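With the git-flow extension installed, the high-level operations it adds look like this (branch and version names follow the model's defaults and are otherwise made up):
git flow init                       # set up master/develop and the branch prefix conventions
git flow feature start my-feature   # branch off develop
git flow feature finish my-feature  # merge back into develop and remove the feature branch
git flow release start 1.2.0        # branch off develop for release hardening
git flow release finish 1.2.0       # merge into master, tag it, and merge back into develop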
I do not know exactly what you want to automate in your deployment workflow, but if you apply the model mentioned above, most of the correct version handling is done by git.
To add some further automatic processing to this, fabric is a simple but great tool, and you will find many tutorials about its usage (also in combination with git).
For handling Python dependencies, using virtualenv and pip is definitely a good way to go.
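In its simplest form that is just something like this per environment:
virtualenv env                      # one isolated Python environment per deployment
. env/bin/activate
pip install -r requirements.txt     # dependencies pinned per release in requirements.txt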
If you need something more complex, e.g. to handle more than one Django instance on one machine or to handle system-wide dependencies, check out Puppet or Chef.
Try Gondor.io or Ep.io; they both make it pretty easy (Gondor especially excels in this area) to have two or more instances with very similar code, deployed from your VCS, and to move data back and forth. (If you need an invite, ask either in IRC, but if I recall correctly, they're both open now.)

Utilizing EC2 or Azure to scale out a distributed C# build system?

Quite a few build and CI systems support steps for pushing build output to Azure, but I haven't seen any that can actually run on Azure (or EC2). Ideally I would like to be able to spin up an arbitrary number of instances (depending on the number of pending submits) to handle the actual build + quality gates (UTs, FxCop, other static analysis tools) + source repository check-in process.
Are there existing tools which can do this, or has anyone built something which they can discuss?
Thanks!
[Edit: I found this question which is quite similar but didn't have any informative answers, so I'll keep my question alive]
If you're using Git or Mercurial for source control, AppHarbor might be what you're looking for. It's a CI build/deploy environment that runs exclusively in the cloud (EC2), and can deploy build output to Azure.
Here are some links for reference:
http://sourcecodebean.com/archives/appharbor-heroku-for-net/987
http://lostechies.com/chrismissal/2011/03/12/using-appharbor-for-continuous-integration
http://haacked.com/archive/2011/05/12/making-let-me-bing-that-for-you-open-source.aspx
http://appharbor.com/page/pricing
The open source Jenkins CI server has an EC2 plugin that will spin up EC2 instances automatically depending on your build load. I couldn't find anything for Azure, but I highly recommend Jenkins: it's easy to configure, well maintained, and has stacks of features.
Continuous Integration on Windows Azure http://code.google.com/p/cassis/ (over Mercurial)
Disclaimer: work produced by my 1st year CS students
TeamCity also has support for this: http://www.jetbrains.com/teamcity/features/amazon_ec2.html

Is there an ideal way to move from Staging to Production for Coldfusion code?

I am trying to work out a good way to run a staging server and a production server for hosting multiple ColdFusion sites. Each site is essentially a fork of a repo, with site-specific changes made to each. I am looking for a good way to have this staging server move code (upon QA approval) to the production server.
One fanciful idea involved compiling each of the sites into EAR files to be run on the production server, but I cannot seem to wrap my head around ColdFusion archives, plus I cannot see any good way of automating this, especially the deployment part.
What I have done successfully before is use Subversion as a go-between for a site: once a site is QA'd, the code is committed and the production server's working directory has an SVN update run against it, which then triggers a code copy from the working directory to the actual live code. This worked fine but has many moving parts, and it still required some form of server access to each server to run the commits and updates. Plus, this worked for an individual site; I think it may be a nightmare to set up and maintain this architecture for multiple sites.
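(Roughly, the moving parts per site looked something like this; paths are simplified:)
# run on the production server once the QA'd code has been committed
svn update /var/svn-working/mysite
rsync -a --delete --exclude=.svn /var/svn-working/mysite/ /var/www/mysite/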
Ideally I would want a group of developers to have FTP access and the ability to log into some control panel to mark a site for QA, then have a QA person check the site and mark it as stable/production-worthy, and then have someone see that a site is pending and click a button to deploy the updated site. (Any of those roles could be filled by the same person, mind you.)
Sorry if that last part wasn't so much the question, just a framework to understand my current thought process.
I agree with @Nathan Strutz that Ant is a good tool for this purpose. Some more thoughts.
You want a repeatable build process that minimizes opportunities for deltas. With that in mind:
SVN export a build.
Tag the build in SVN.
Turn that export into a .zip, something with an installer, etc... the idea being one unit to validate, with a set of repeatable deployment steps.
Send the build to QA.
If QA approves, deploy that build into production.
Move whole code bases over as a build, rather than just changed files. This way you know what's put into place in production is the same thing that was validated. Refactor code so that configuration data is not overwritten by a new build.
As for actual production deployment, I have not come across a tool to solve the multiple servers, different code bases challenge. So I think you're best served rolling your own.
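A sketch of the export/tag/package steps above, with a made-up repository URL and version number:
# produce a clean build (no .svn metadata) and tag exactly what was exported
svn export http://svn.example.local/repo/trunk mysite-1.4.0
svn copy http://svn.example.local/repo/trunk http://svn.example.local/repo/tags/mysite-1.4.0 -m "tag build 1.4.0"
zip -r mysite-1.4.0.zip mysite-1.4.0   # one unit to hand to QA and, once approved, to deploy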
As an aside, in your situation I would think through an approach that allows for a standardized codebase, with a mechanism (i.e. an API) that allows for the customization you're describing. Otherwise managing each site as a "custom" project is very painful.
Update
Learning Ant: Ant in Action [book].
On source control: for the situation you describe, I would maintain a core code base plus an overlay per site. Export the core, then export the site-specific overlay on top of it. This ensures that any core updates which the site-specific changes don't override make it in.
Call this combination a "build". Do builds with Ant. Maintain an Ant script, or perhaps more flexibly an Ant configuration file, per core & site combination. Track the version numbers of the core and the site as part of a given build.
If your software is stuffed inside an installer (Nullsoft Install Shield, for instance), that should be part of the build. Otherwise you should generate a .zip file (.ear is a possibility as well, but I haven't seen anyone actually do this with CF). The point being: one file that encompasses the whole build.
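Sketching the core-plus-overlay export described above (the repository layout is an assumption):
# export the core first, then the site-specific overlay on top of it
svn export http://svn.example.local/repo/core/trunk build/mysite
svn export --force http://svn.example.local/repo/sites/mysite/trunk build/mysite
zip -r mysite-build.zip build/mysite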
This build file is what QA should validate. So validation includes deployment, configuration and functionality testing. See my answer for deployment on how this can flow.
Deployment:
If you want to automate deployment, QA should be involved as well to validate it. Meaning QA would deploy/install builds using the same process on their servers before doing a staging-to-production deployment.
To do this I would create something that tracks which server receives which build file, plus whatever credentials and connection information are necessary to make that happen, most likely via FTP. Once the file is transferred, the tool would then extract the build file / run the installer. This last piece is an area I would have to research: how to let one server run commands such as extraction or installation remotely.
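One common way to handle that last piece is to use SSH instead of FTP, since it lets the transfer and the remote extraction happen in one scripted step (a sketch, with made-up hostnames and paths):
scp mysite-1.4.0.zip deploy@prod1.example.local:/tmp/
ssh deploy@prod1.example.local 'unzip -o /tmp/mysite-1.4.0.zip -d /var/www/mysite'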
You should look into Ant as a migration tool. It allows you to package your build process with a simple XML file that you can run from the command line or from within Eclipse. Creating an automated build process is great because it documents the process as well as executes it the same way, every time.
Ant can handle zipping and unzipping, copying things around, making backups if needed, working with your Subversion repository, transferring via FTP, compressing JavaScript, and even calling a web address if you need to do something like flush the application memory or server cache once it's installed. You may be surprised by the things you can do with Ant.
To get started, I would recommend the Ant manual as your main resource, but look into existing Ant builds as a good starting point to get you going. I have one on RIAForge for example that does some interesting stuff and calls a groovy script to do some more processing on my files during the build. If you search riaforge for build.xml files, you will come up with a great variety of them, many of which are directly for ColdFusion projects.