When developing, my team obviously uses development as our environment.
When we run automated tests, we use testing.
We also have staging and production environments, respectively used for our testers to check out features and the final "live" product.
We're trying to set up an internal CI server to run our automated tests against and to eventually assist with automated deployments.
Since the CI server is really running automated tests, some think it should run in the testing environment. However, in order for the CI server to actually be useful, my thought is that it needs to run in production mode against as close a mirror of the actual production environment as possible (without touching the production DB, obviously).
Is there an accepted environment that a CI server should be executed under? The production environment (with a different DB) seems the only logical answer to me, but I may be missing something...
Running any tests on the PROD environment because, as you said, it
seems the only logical answer
is not quite right. There are risks that your tests could seriously damage the actual environment/application to the point where you'd be facing a recovery effort. After all, the dark side of testing is to show that your software has more than just minor bugs and is not working as expected.
I can think of at least these 'why not test in production' considerations:
when the product is launched, the customer relies on it, expecting that your software works (having already been tested). Your live environment should do its job and not be loaded with tests. If the product misbehaves (or does not perform), the technical team has to be sent in to contain the damage, fix the gaps, and make it run hassle-free. That not only affects the product's cost, it delays the project deadlines in a major way, which has a knock-on effect on the vendor's profits and the next few projects.
when the production or development team completes product development at their end, they have to provide this test environment to the testing team before loading their newly developed product onto it for testing.
To me, even though you
also have staging and production environments
it is essential to use the Test one accordingly. Furthermore, the testing environment should be configured as closely as possible to production. Also, one person could be trying to test while another person breaks the very thing being tested; without the two being separate, there is no way to do proper testing.
Just to give a full answer: your STAGE environment can have different roles depending on the company.
One is that it can be the QA/STAGE environment holding an exact copy of production, used for both QA and system testing (testing of the system when a lot of updates/changes or an upgrade is about to go into production).
UPDATE:
That was my point too. The QA environment should be a mirror of PROD. A possible solution to your issue with caching/pre-loading files onto staging/production is to create pre-/post-step .bat (let's assume) files.
In our current test project we use this approach. In the pre-steps we set up the files needed for test execution (like removing files from previous runs and downloading the latest copies/artifacts). In the post-steps we set up the reporting files we need. The advantage is that your files are collected and synced before every execution.
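As a rough illustration only (a sketch, not our actual scripts), a pre-step along those lines might look like this in Python; the workspace path and artifact URL are made-up placeholders:

    # pre_step.py -- hypothetical pre-step: clean the previous run's output and fetch the latest artifacts
    import os
    import shutil
    import urllib2  # Python 2; on Python 3 use urllib.request

    WORK_DIR = "C:/ci/test-workspace"                               # placeholder workspace path
    ARTIFACT_URL = "http://ci.example.local/artifacts/latest.zip"   # placeholder artifact location

    def clean_previous_run(path):
        """Remove leftovers from the previous execution so every run starts clean."""
        if os.path.isdir(path):
            shutil.rmtree(path)
        os.makedirs(path)

    def download_latest_artifact(url, dest):
        """Download the newest build artifact into the fresh workspace."""
        response = urllib2.urlopen(url)
        with open(dest, "wb") as f:
            f.write(response.read())

    if __name__ == "__main__":
        clean_previous_run(WORK_DIR)
        download_latest_artifact(ARTIFACT_URL, os.path.join(WORK_DIR, "latest.zip"))

A post-step could do the mirror image: collect the report files and copy them to wherever they are archived.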
About the
not on the same physical hardware
in my case we maintain a dedicated remote test server. The advantages are clear; the only thing to keep in mind is that it will require maintenance (administration).
Related
This may seem like a very broad question, but I am really interested to know about possible approaches. Our team has a Django web app and we have a huge number of unit tests for our features. In GitHub, we have a master branch, a develop branch, and individual feature/bug branches. The problem I want to solve is:
Every time some code is merged into the develop branch, I want to run all (or a subset) of the unit tests against that branch. It would be cool to have it automated, i.e. I should not have to trigger the test run.
I have read and heard about Jenkins - http://michal.karzynski.pl/blog/2014/04/19/continuous-integration-server-for-django-using-jenkins/. It is currently one of the approaches I am leaning towards.
But I wanted to know if there are better approaches or tools that I could use.
Appreciate all your help.
For what it's worth, you can't really go wrong with Jenkins for the functionality you are looking to achieve.
Although Travis CI may be a better option, given that it's meant to work seamlessly with GitHub and it appears all of your repositories have been moved to GitHub.
Really depends on your business needs though.
Getting Jenkins up and running has, in my past experience, always gone very smoothly, and it gives you the benefit of keeping all data in house since you have the option to host Jenkins on your own private servers. Depending on your setup, though, it probably doesn't scale or run as efficiently as Travis CI.
Travis CI will probably allow for an even more seamless approach because it's already hosted for you and tied directly into GitHub, but you won't get the privacy of running Jenkins on your own servers. There is a paid option for Travis CI which, again depending on your business needs, may be the better fit.
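Whichever tool you choose, the job itself usually boils down to a couple of commands run on every merge. A minimal sketch of such a step for a Django project (the requirements file name and test command are assumptions about your project):

    # run_ci_tests.py -- hypothetical step a CI job could execute after each merge to develop
    import subprocess
    import sys

    STEPS = [
        ["pip", "install", "-r", "requirements.txt"],  # install/update dependencies (assumed file name)
        ["python", "manage.py", "test"],               # run the Django test suite
    ]

    def main():
        for step in STEPS:
            # Propagate a non-zero exit code so the CI server marks the build as failed.
            code = subprocess.call(step)
            if code != 0:
                sys.exit(code)

    if __name__ == "__main__":
        main()

Both Jenkins and Travis CI treat a non-zero exit code from a build step as a failed build, so this is essentially all the integration the tools need from your side.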
Currently, our unit tests are committed along with the application code and executed by a build bot job on each commit. Likewise, code coverage is calculated. However, both UTs and coverage are - or can be - conducted by the developers before they commit new features to the repository, so the CI process does not seem to add any value.
What kind of tests do you suggest should be executed on the CI server which are not executed by the developer's prior to commit?
so the CI process does not seem to add any value
No?
What happens if a developer doesn't run the tests before committing?
What happens if a developer commits a change which conflicts with a change committed by another developer?
What happens if the tests pass on the developer's machine but not on other machines?
What happens if another developer has changed a test?
...
CI isn't "continuous testing", it's "continuous integration". The need to run the tests as part of the integration build is to validate that the changes committed can successfully integrate with what's already there. Whether or not the tests passed on the developer's local potentially-non-integrated workstation is immaterial.
The unit tests (any reasonably fast automated tests, really) should be executed by the CI server to validate the current state of the build. That state may be different than what is on one individual developer's workstation.
They may be the same tests which were just run by the developer moments prior. But the context in which the tests run is physically different and the need to run them is semantically different. Unless you're being charged by the clock cycle, there's no reason to omit running tests.
David raises very good points. One thing that he didn't mention though is that automated testing in a continuously integrated environment can be even more powerful when it goes beyond unit tests. The CI process allows you to run integration and system level tests that would be too expensive to run on a dev box. For example, you may have unit tests for your persistence layer that run against an in-memory database. Your CI server however can run these same automated tests against a snapshot of your production database.
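As a rough sketch of that idea (the setting names, host, and environment variable below are illustrative assumptions, not a prescribed layout), Django test settings could pick a database based on where the tests run:

    # settings_test.py -- hypothetical test settings: in-memory SQLite on a dev box,
    # a snapshot of the production database on the CI server (selected via an env variable)
    import os

    if os.environ.get("CI_DB_SNAPSHOT"):               # assumed flag set only on the CI server
        DATABASES = {
            "default": {
                "ENGINE": "django.db.backends.mysql",
                "NAME": "app_prod_snapshot",           # nightly copy of production, never the live DB
                "USER": "ci",
                "PASSWORD": os.environ["CI_DB_PASSWORD"],
                "HOST": "db-snapshot.internal",        # placeholder host name
            }
        }
    else:
        DATABASES = {
            "default": {
                "ENGINE": "django.db.backends.sqlite3",
                "NAME": ":memory:",                    # fast in-memory DB for developer runs
            }
        }

The developers keep their fast in-memory runs, while the CI server exercises the same tests against realistic data.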
Completely agree with the previous posts; unit testing is mostly done by devs, so which tests should be executed as part of the CI process is pretty much opinion based, depending on the goals of the team/project.
What is also important is that CI (the server) gives you a separate testing environment, so your test effort and execution can run independently, with your tests executed in a clone of the production environment.
In my experience I've used a CI server mostly for system, integration, functional, and regression testing, and UAT.
I wonder if there is any best practice, or at least a more practical way, to deploy C/C++ executables to Linux-based production servers.
I have Jenkins up and running as the CI server, and I created a main SVN module which contains multiple svn:externals. This module mainly serves as a pipeline of related C++ applications. (Perhaps I should post another question on whether svn:externals is the correct way to do this.)
So the main question is about the deployment steps. I am planning to make all production servers Jenkins slaves with a parameterized config, for the purpose of building from SVN tags, and to use some scripts to copy all the executables to, e.g., /opt/mytools/bin on multiple production servers (roughly like the sketch below).
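The copy script I have in mind is nothing more elaborate than this; the host names and artifact paths are placeholders, and it assumes the servers are reachable over scp:

    # deploy_binaries.py -- rough sketch of the copy step (hosts and paths are placeholders)
    import subprocess

    SERVERS = ["prod-app-01", "prod-app-02"]             # placeholder production host names
    ARTIFACTS = ["build/mytool", "build/mytool-helper"]  # placeholder build outputs
    TARGET_DIR = "/opt/mytools/bin"

    def deploy(server):
        """Copy the freshly built executables onto one production server via scp."""
        for artifact in ARTIFACTS:
            subprocess.check_call(["scp", artifact, "%s:%s/" % (server, TARGET_DIR)])

    if __name__ == "__main__":
        for server in SERVERS:
            deploy(server)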
Any recommendations?
The best deployment route is the one specified by your distribution, IMHO. That is, for Debian, bundle your applications into .deb packages, put them into a repository, and let apt-get take care of the rest. This way, you have minimal impact on the production environment and most admins are already familiar with the deployment scheme.
I'm working through some of the same questions, and I'm finding that Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation by Humble and Farley has been a good (technology agnostic) starting point - not perfect but it's pointed me in the right direction when I had no idea what to do next.
The Continuous Delivery book recommends setting up 'build pipelines' in which you run progressively more automated tests, with only the final manual tests and the deploy/rollback steps being triggered by a real person.
Joel seems to think highly of daily builds. For a traditional compiled application I can certainly see his justification, but how does this carry over to web development -- or does it not?
A bit about the project I'm asking about --
There are 2 developers working on a Django (Python) web app. We have 1 SVN repository. Each developer maintains a checkout and their own copy of MySQL running locally (if you're unfamiliar with Django, it comes bundled with its own test server, much the way ASP apps can run inside of Visual Studio). Development and testing are done locally, then committed back to the repository. The actual working copy of the website is an SVN checkout (I know about SVN export and it takes too long). The closest we have to a 'build' is a batch file that runs an SVN update on the working copy, does the Django bits ('manage.py syncdb'), updates the search engine cache (solr), then restarts Apache.
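Spelled out in Python, that batch file amounts to roughly the following (the solr refresh and Apache restart commands here are placeholders, not our exact invocations):

    # nightly_update.py -- rough Python equivalent of the batch file described above
    # (the solr refresh and Apache restart commands are placeholders)
    import subprocess

    STEPS = [
        ["svn", "update", "/var/www/site"],                            # update the working copy
        ["python", "/var/www/site/manage.py", "syncdb", "--noinput"],  # the Django bits
        ["python", "/var/www/site/manage.py", "update_solr_cache"],    # refresh the search engine cache (placeholder command)
        ["sudo", "/etc/init.d/apache2", "restart"],                    # restart Apache
    ]

    for step in STEPS:
        subprocess.check_call(step)   # stop on the first failing step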
I guess what I don't see is the parallel to web apps.
Are you doing a source controlled web app with 'nightly builds' -- if so, what does that look like?
You can easily run all of your Django unit tests through the Django testing framework as your nightly build.
That's what we do.
We also have some ordinary unit tests that don't leverage Django features, and we run those, also.
Even though Python (and Django) don't require the kind of nightly compile/link/unit test that compiled languages do, you still benefit from the daily discipline of "Don't Break The Build". And a daily cycle of unit testing everything you own is a good thing.
We're in the throes of looking at Python 2.6 (which works perfectly for us) and running our unit tests with the -3 option to see which deprecated features we're using. Having a full suite of unit tests assures us that a change for Python 3 compatibility won't break the build. And running them nightly means that we have to be sure we're refactoring correctly.
Continuous integration is useful if you have the right processes around it. TeamCity from JetBrains is a great starting point if you want to build familiarity:
http://www.jetbrains.com/teamcity/index.html
There's a great article that relates directly to Django here:
http://www.ajaxline.com/continuous-integration-in-django-project
Hope this gets you started.
Web applications built in dynamic languages may not require a "compilation" step, but there can still be a number of "build" steps involved in getting the app to run. Your build scripts might install or upgrade dependencies, perform database migrations, and then run the test suite to ensure that the code is "clean" w.r.t. the actual checked-in version in the repository. Or, you might deploy a copy of the code to a test server, then run a set of Selenium integration tests against the new version to ensure that core site functionality still works.
It may help to do some reading on the topic of Continuous Integration, which is a very useful practice for webapp dev teams. The more fast-paced and agile your development process, the more you need regular input from automated testing and quality metrics to make sure you fail fast and loud on any broken version of the code.
If it's really just you and one other developer working on it, nightly builds are probably not going to give you much.
I would say that the web app equivalent of nightly builds would be staging sites (which can be built nightly).
Where nightly builds to a staging area start paying real dividends is when you have clients, project managers, and QA people that need to be able to see an up to date, but relatively stable version of the app. Your developer sandboxes (if you're like me, at least) probably spend a lot of time in an unusable state as you're breaking things trying to get the next feature implemented. So the typical problem is that a QA person wants to verify that a bug is fixed, or a PM wants to check that some planned feature was implemented correctly, or a client wants to see that you've made progress on the issue that they care about. If they only have access to developer sandboxes, there's a good chance that when they get around to looking at it, either the sandbox version isn't running (since it means ./manage.py runserver is up in a terminal somewhere) or it's in a broken state because of something else. That really slows down the whole team and wastes a lot of time.
It sounds like you don't have a staging setup since you just automatically update the production version. That could be fine if you're way more careful and disciplined than I (and I think most developers) am and never commit anything that isn't totally bulletproof. Personally, I'd rather make sure that my work has made it through at least some cursory QA by someone other than me before it hits production.
So, in conclusion, the setup where I work:
each developer runs their own sandbox locally (same as you do it)
there's a "common" staging sandbox on a dev server that gets updated nightly from a cronjob. PMs, clients, and QA go there. They are never given direct access to developer sandboxes.
There's an automated (though manually initiated) deployment to production. A developer or the PM can "push" to production when we feel things have been sufficiently QA'd and are stable and safe.
I'd say the only downside (besides a bit of extra overhead setting up the nightly staging builds) is that it makes for a day of turnaround on bug verification. ie, QA reports a bug in the software (based on looking at that day's nightly build), developer fixes bug and commits, then QA must wait until the next day's build to check that the bug is actually fixed. It's usually not that much of a problem since everyone has enough stuff going on that it doesn't affect the schedule. When a milestone is approaching though and we're in a feature-frozen, bugfix only mode, we'll do more frequent manual updates of the staging site.
I've had great success using Hudson for continuous integration. Details on using Hudson with Python by Redsolo.
A few months ago, several articles espousing continuous deployment caused quite a stir online. IMVU has details on how they deploy up to 5 times a day.
The whole idea behind frequent builds (nightly or more frequent like in continuous integration) is to get immediate feedback in order to reduce the elapsed time between the introduction of a problem and its detection. So, building frequently is useful only if you are able to generate some feedback through compilation, (ideally automated) testing, quality checks, etc. Without feedback, there is no real point.
I'm familiar with TDD and use it in both my workplace and my home-brewed web applications. However, every time I have used TDD in a web application, I have had the luxury of having full access to the web server. That means that I can update the server then run my unit tests directly from the server. My question is, if you are using a third party web host, how do you run your unit tests on them?
You could argue that if your app is designed well and your build process is sound and automated, that running unit tests on your production server isn't necessary, but personally I like the peace of mind in knowing that everything is still "green" after a major update.
For everyone who has responded with "just test before you deploy" and "don't you have a staging server?", I understand where you're coming from. I do have a staging server and a CI process set up. My unit tests do run and I make sure they all pass before an update to production.
I realize that in a perfect world I wouldn't be concerned with this. But I've seen it happen before. If a file is left out of the update or a SQL script isn't run, the effects are immediately apparent when running your unit tests but can go unnoticed for quite some time without them.
What I'm asking here is if there is any way, if only to satisfy my own compulsive desires, to run a unit test on a server that I cannot install applications on or remote into (e.g. one which I will only have FTP access to in order to update files)?
I think I probably would have to argue that running unit tests on your production server isn't really part of TDD, because by the time you deploy to your production environment you are, technically speaking, past "development".
I'm quite a stickler for TDD, and when I'm preaching the benefits to clients I often find myself saying "you can't half adopt TDD, it's all or nothing"
What you probably should have is some form of automated testing that you perform "after" deployment but these are not part of TDD.
Maybe you should look at your process again.
You could write functional tests in something like WATIR, WATIN or Selenium that test what is returned in the response page after posting certain form data or requesting specific URLs.
For clarification: what sort of access do you have to your web server? FTP or WebDAV only? From your question, I'm guessing ssh access isn't available - you're dropping files in a directory to deploy. Is that correct?
If so, the answer for unit testing is likely 'do it before you deploy'. You can set up functional testing driven by an automated tool like Selenium to test your app remotely via the web interface, but that's not really unit testing in the sense that you're restricted to testing the system as a whole.
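For example, a minimal Selenium check against the deployed site might look like this with the Python bindings (the URL and element id are made up):

    # smoke_test.py -- hypothetical Selenium check run remotely against the deployed site
    from selenium import webdriver

    def test_homepage_loads():
        driver = webdriver.Firefox()
        try:
            driver.get("http://www.example.com/")        # placeholder URL
            driver.find_element_by_id("login-form")      # raises if the login form is missing from the page
        finally:
            driver.quit()

    if __name__ == "__main__":
        test_homepage_loads()
        print("Smoke test passed")

It exercises the deployed system as a whole rather than individual units, which is the restriction mentioned above.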
Have you considered setting up a staging server, perhaps as a VMWare instance, that mirrors or at least mimics your deployment environment?
What's preventing you from running unit tests on the server? If you can upload your production code and let it run there, why can't you upload this other code and run it as well?
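One low-tech way to take that literally, assuming the host runs Python and executes CGI scripts (both of which are assumptions about your hosting), is a test-runner script you upload over FTP next to the app and open in a browser; a sketch with a hypothetical 'tests' module:

    # run_tests.cgi -- hypothetical test runner uploaded alongside the app
    # (assumes the host executes Python CGI scripts and your tests live in a 'tests' module)
    import sys
    import unittest

    import tests  # your application's test module (assumed name)

    print("Content-Type: text/plain")
    print("")

    suite = unittest.TestLoader().loadTestsFromModule(tests)
    result = unittest.TextTestRunner(stream=sys.stdout, verbosity=2).run(suite)
    print("GREEN" if result.wasSuccessful() else "RED")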
I've written test tools for sites using Python and httplib/urllib2. Generally it would have been overkill, but it was suitable in these cases. Not sure it's going to be of general use though.
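Along those lines, such a tool can be as small as a handful of HTTP assertions; a sketch with urllib2, where the URLs and expected strings are placeholders:

    # http_checks.py -- sketch of the kind of urllib2-based check mentioned above
    import urllib2

    CHECKS = [
        ("http://www.example.com/", "Welcome"),         # page URL and a string expected in the body
        ("http://www.example.com/login/", "Password"),
    ]

    for url, expected in CHECKS:
        response = urllib2.urlopen(url)
        body = response.read()
        assert response.getcode() == 200, "Bad status for %s" % url
        assert expected in body, "Missing %r in %s" % (expected, url)

    print("All checks passed")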