I have a team build (upgrade template, TFS 2010, MSBuild) compiling and testing a WCF service. We use PsExec and the Exec task to remotely install the service (WiX installer) on the web server before running an integration test suite against it. However, sometimes our nightly build fails with a compilation error - we can only see the first 1024 bytes of the response, and most of it is CSS styles. I've tried delaying the tests with sleeps, thinking it might be due to a long JIT; however, all 600+ integration tests fail. In the build log the Exec task running PsExec appears to execute synchronously, as expected, and returns exit code 0. Can anyone think of a reason why this happens now and then?
This doesn't sound like anything specific to TFS, MSBuild or PsExec -- it sounds like there may be an intermittent installation, configuration, or coding problem with the service. The point of CI and integration tests is to get early feedback on your process, and apparently something is wrong. The trick is drilling down into the problem and ruling out where the issue(s) reside.
Psexec claims the WiX deployment went fine, but did it? Are all files present? Were previous versions of the installation properly removed, or did they get upgraded correctly?
All 600 tests fail, but the tests don't contain the proper stack trace - can you reproduce the problem outside of the tests? E.g., when the tests fail, can you emulate a test manually or run one of the existing tests with a debugger attached to see the same stack trace? One strategy might be to identify one or two specific tests that accurately validate the deployment -- run just those tests after the deployment, and if they fail, abort the build and leave the server in its failed state for deeper analysis. This may require customizing the build template, but it may be worth the effort.
Can you add logging to the WCF service? Better logging to the tests?
Lastly, as mentioned previously, CI is about early feedback. The general rule is, "If something is painful then you should do it more often." Focus on pain points, isolate them and iteratively improve them. When the pain diminishes, focus on other pain points. In your case, consider running your "nightly" build in a rolling fashion - you'll find your intermittent problem after a few runs rather than after weeks.
As a preface, our setup is somewhat unusual due to "legacy" reasons. It is fully possible that I am going against the grain with this. I would love to get an expert opinion on whether it is possible to get my current setup working or a suggestion on a different approach.
Environment
Java application with over 10K unit tests in JUnit. For legacy reasons the entire unit test run takes a long time (the ultimate goal is to fix the root of the problem, but this will not happen soon).
The application is broken up into multiple modules, with each module having its own unit tests. Executing tests module by module takes a reasonable amount of time, so if someone commits code to the repo subtree for a module and only that module's tests are executed, they can get a result quickly.
Current Jenkins Setup
JUnit job
This is the single parameterized job that can run tests for any module. The job takes as parameters the regexes for which tests to run and a parameter indicating which module it is running, for notification purposes. It checks out the whole repo tree and then does the run based on the parameters.
After the run completes, this job analyzes the JUnit results, publishes the report and sends out email notifications.
Repo watchers
One repo watcher for each module. The watcher checks out only the repo subtree that it wants to monitor. When a change is detected, it triggers the JUnit job, telling it which tests to run and which component the run is for.
Question
In general the setup works well and does exactly what I need; however, it breaks a few of the nice and expected features of Jenkins and the JUnit plugin.
Because the same job keeps executing different subsets of unit tests, the job-to-job comparison of unit test results does not provide any value. Without manually scanning between jobs it is not possible to tell what changed in terms of new failures or new fixes to unit tests.
Something very similar happens to change history:
Each repo watcher runs on its own schedule. Suppose we have a change to module A and a change to module B, very close in time to each other. If watcher A triggers first, the JUnit job triggered by watcher A will "claim" both changes. When the JUnit job triggered by watcher B runs, it will not detect any new changes in the repo. This plays havoc with email notifications, as the second JUnit job does not know who broke the build.
At the end of the day I believe I am looking for a way to establish a dependency relationship between non-sequential runs of the same job in Jenkins, or alternatively a totally different approach.
Thank you!
Okay, so let's see if I get this right in plain language:
You want to track which failures are caused by which changeset?
In that case I would suggest the following - again in simple terms; you will need to adapt this to your current setup.
Set up a job that manages results
This job should be parameterized to take the name or change number and publish ALL the results.
This job should be triggered after each test run completes, to consolidate all results.
Now, if a new test or failure is introduced, the same job can track it and email the person who caused the failure.
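As a rough sketch of that consolidation step (written in Python; the file locations and the JUnit XML layout assumed here are illustrative, not your actual setup), the managing job could diff the JUnit results of the current run against the previous run for the same module and report what is newly broken or newly fixed:

    # Compare the JUnit XML of the current run against the previous run for the
    # same module; paths and layout are illustrative assumptions.
    import xml.etree.ElementTree as ET

    def failed_tests(junit_xml_path):
        """Return the set of 'classname.testname' entries that failed or errored."""
        failed = set()
        for case in ET.parse(junit_xml_path).iter("testcase"):
            if case.find("failure") is not None or case.find("error") is not None:
                failed.add("%s.%s" % (case.get("classname"), case.get("name")))
        return failed

    previous = failed_tests("results/moduleA/previous.xml")
    current = failed_tests("results/moduleA/current.xml")

    new_failures = current - previous   # introduced by the changeset under test
    new_fixes = previous - current      # fixed since the last run
    print("New failures: %s" % sorted(new_failures))
    print("New fixes: %s" % sorted(new_fixes))

The same diff, keyed by the change number passed in as a parameter, is what would let the job decide who to email.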
Jenkins is very powerful and very generic, so pretty much every scenario can be handled - if not with plugins, then with Groovy. So I would suggest taking a pen, mapping it out on a whiteboard, and turning it into a process rather than just a single job.
So after much hunting I failed to find a continuous testing tool for IntelliJ 14.
I stumbled across a post that describes using Eclipse and Ant to simulate this: on save, Ant runs the tests for any tests that were modified.
I've tried to replicate this but, alas, I've never used Ant before and am finding it extremely difficult. I've set up and configured a generic Ant build file in IntelliJ but simply cannot figure out how to achieve my task.
Any help or pointers in the right direction are very much appreciated. I've searched, but have only found information that needs to be deciphered first.
Eclipse has the builder feature: you create an Ant builder for your project; see also https://stackoverflow.com/a/15075732/130683.
IntelliJ has a trigger feature that might serve the purpose.
Also, Infinitest, which provides a continuous testing plugin for Eclipse and IntelliJ, might be helpful.
Ant is a build tool. Although IntelliJ does builds for you, that means you need IntelliJ to do them, which means you can't build and distribute your application without IntelliJ.
Ant uses a dependency graph for building. This is sometimes difficult for developers to understand, but it basically means that you define the steps and how the steps depend upon each other, and let the build tool figure out exactly how to do its job. Ant is to Java what Make is to C and C++ applications.
Ant uses targets, which are the steps you specify to be done. For example, you might have a target called package that builds your jar or war. That target might depend upon another target called compile to compile the code. That target might in turn depend upon a code generation phase (for example, if you had WSDL files).
Each target is a set of tasks. For example, the compile target is likely to have the <javac> task in it. It might also need the <mkdir> task to create the work directories where your class files are stored.
There are plenty of books on Ant, and there's a tutorial on the Ant Website. You didn't explain the issues you were having, so it's hard to be more specific than this.
Ant can also run your unit tests. There's a <junit> task which can run the tests; you can run almost all of your tests via the <batchtest> sub-element, or, if you have a driver program, specify it via the <test> element.
Once you get an Ant script that can build and run your tests outside of IntelliJ, you can now get a Continuous Integration tool like Jenkins. A continuous integration tool watches your repository for changes, and if a change occurs, will then build your application. It's a great way to catch errors early on.
What does this have to do with Continuous Testing? Well, if you have your Ant script able to run unit tests, the Continuous Integration engine not only can build your app, but then run the unit tests with each and every change that occurs.
Jenkins is nice because it's very simple to use. You download a jenkins.war and you can launch the Jenkins webpage via the java -jar jenkins.war command. This brings up a web server on port 8080 on your machine. Obviously, Jenkins can be configured to run on different ports and under Tomcat if you so desire. It can integrate with Windows Active Directory, LDAP, and many other user verification systems.
Jenkins will show you charts and graphs of your tests, let you know which tests failed or passed, and will notify you of any problems via email, tweets, IM, Jabber, and even Facebook posts. People have even setup a traffic light in their offices that turns red when builds or tests fail.
Take it one step at a time. Get a good book on Ant. Read the tutorial on the Ant website. Then try to get a working Ant script just to build your app. If you are having specific issues, you can ask for help.
Once you have the build going, extend the script to run your unit tests. Once that is done, download Jenkins and try to get that up and running.
When developing, my team obviously uses development as our environment.
When we run automated tests, we use testing.
We also have staging and production environments, respectively used for our testers to check out features and the final "live" product.
We're trying to set up an internal CI server to run our automated tests against and to eventually assist with automated deployments.
Since the CI server is really running automated tests, some think it should run in the testing environment. However, in order for the CI server to actually be useful, my thought is that it needs to run in production mode against as close a mirror of the actual production environment as possible (without touching the production DB, obviously).
Is there an accepted environment that a CI server should be executed under? The production environment (with a different DB) seems the only logical answer to me, but I may be missing something...
Running any tests on the PROD environment, as you said,
seems the only logical answer
but that is not quite true. There are risks that your tests can seriously damage the actual environment/application, to the point where you'll be facing a recovery operation. After all, the dark side of testing is to show/find that your software has more than just minor bugs and is not working as expected.
I can think of at least these 'why not test production' considerations:
when the product is launched, the customer relies on it, expecting that your software works (having already been tested). Your live environment should do its job and not be loaded with tests. If the product misbehaves (or does not perform), the technical team has to be sent in to cover the damage, fix the gaps and make it run hassle-free. This not only affects the product's cost, it can also delay the project deadlines in a major way, which has a knock-on effect on the vendor's profits and the next few projects.
when the production or development team completes a piece of development at their end, they have to provide this test environment for the testing team before loading their newly developed product onto that environment for testing.
To me, no matter that you
also have staging and production environments
it is essential to use the Test one accordingly. Furthermore, the Testing environment should be (configured) as close as possible to Production. Also, one person could be trying to test while another person breaks the very thing he has been testing; without the two being separate there is no way to do proper testing.
Just to give a full answer: your STAGE environment can have different roles depending on the company.
One is that it can be the QA/STAGE environment that has an exact copy of production which is used for both QA and system testing (testing of the system when a lot of updates/changes or upgrade is going to go into production).
UPDATE:
That was my point too: the QA environment should be a mirror of PROD. A possible solution to your issue with caching/pre-loading files onto staging/production is the creation of pre-/post-step .bat files (let's assume).
In our current test project we use this approach. In the pre-steps we set up the files needed for test execution (such as removing files from previous runs and downloading the latest copies/artifacts). In the post-steps we set up the reporting files needed. The advantage is that your files are collected and synced before every execution.
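For illustration only, a pre-step along those lines could look like the sketch below (written in Python rather than as a .bat, and with made-up paths and artifact location):

    # Sketch of a pre-step: clear leftovers from the previous run and pull in
    # the latest build artifact. Paths and the drop location are illustrative.
    import os
    import shutil

    WORK_DIR = r"C:\tests\work"
    LATEST_ARTIFACT = r"\\buildserver\drops\latest\MyApp.zip"   # assumed network share

    if os.path.isdir(WORK_DIR):
        shutil.rmtree(WORK_DIR)            # remove files from previous runs
    os.makedirs(WORK_DIR)
    shutil.copy(LATEST_ARTIFACT, WORK_DIR) # copy down the latest artifact

The post-step mirror of this would simply collect the report files into a similar, known location.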
About the
not on the same physical hardware
in my case we maintain a dedicated remote test server. The advantages are clear; the only thing to consider is that it will require maintenance (administration).
Let me begin by saying I'm a ColdFusion newbie.
I'm trying to research whether it's possible to do the following, and what the best approach would be to achieve it.
Whenever a developer checks code into SVN, I would like to get all the new changes/files and do an automated build to check whether the code can be deployed successfully to the production server. I guess there are two parts to it: first, syntax checking, and second, integration testing (whether the functionality works as expected). For the latter part some unit test tools would have to be used.
Can someone comment on their experience doing something similar for ColdFusion?
Sorry for being a bit vague... I know it's a very open-ended question, but any feedback would be appreciated.
Thanks
There's a project called "Cloudy With A Chance of Tests" that purports to do what you require. In particular it brings together a number of other CFML code analysis projects (VarScope & QueryParam) to check code, as well as unit testing. I am not currently using it myself but did have a look at it some time ago (more than 12 months) and it appeared to be quite good.
https://github.com/mhenke/Cloudy-With-A-Chance-Of-Tests
Personally I run MXUnit tests in Jenkins using the instructions from the MXUnit site - available here:
http://wiki.mxunit.org/display/default/Continuous+Integration+--+Running+tests+with+Jenkins
Essentially this is set up as an ant task in Jenkins, which executes the MXUnit tests and reports back the results.
We're not doing fully continuous integration, but we have a process which automates some of the drudgery of our builds:
replace the site's application.cf(m|c) with one that tells users that the app is being deployed (we had QA staff raising defects that were due to re-deployments)
read a database manifest XML which lists all SQL scripts which make up the current release. We concatenate the scripts into a single upgrade script, suitable for shipping
execute the SQL script against the server's DB, noting any errors. The concatenation process also adds a line of SQL after each imported script that writes to a runlog table, so we can see what ran, how long it took and which build it was associated with (see the sketch after this list). If you're looking to replicate this step, take a look at Liquibase
deploy the latest code
make an http call to a ?reset=true type URL to tell the app to re-initialize
execute any tests
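To make the manifest/concatenation step above a bit more concrete, here is a rough sketch in Python (our real tooling differs; the manifest layout, runlog table and GETDATE() call are illustrative assumptions, the latter being SQL Server specific):

    # Concatenate the release's SQL scripts into one upgrade script, appending a
    # runlog INSERT after each one. Paths, manifest layout and the runlog table
    # are illustrative assumptions.
    import os
    import xml.etree.ElementTree as ET

    BUILD_LABEL = os.environ.get("BUILD_NUMBER", "dev")

    # assumed manifest shape: <manifest><script path="001_create.sql"/>...</manifest>
    scripts = [node.get("path") for node in ET.parse("db/manifest.xml").iter("script")]

    with open("upgrade.sql", "w") as upgrade:
        for path in scripts:
            with open(os.path.join("db", path)) as f:
                upgrade.write(f.read())
            # record what ran and for which build after each imported script
            upgrade.write(
                "\nINSERT INTO runlog (script_name, build_label, ran_at) "
                "VALUES ('%s', '%s', GETDATE());\n"
                % (os.path.basename(path), BUILD_LABEL)
            )

Liquibase, as mentioned, gives you this kind of bookkeeping (and rollback support) out of the box.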
The build is requested manually through the build servers we have, but you click a button, make tea and it's done.
We've just extended the above to cope with multiple servers in a cluster and it ticks along nicely. I think the above suggestion of using the Jenkins SVN plugin to automate the process sounds like the way to go.
Joel seems to think highly of daily builds. For a traditional compiled application I can certainly see his justification, but how does this carry over to web development -- or does it not?
A bit about the project I'm asking about --
There are 2 developers working on a Django (Python) web app. We have 1 SVN repository. Each developer maintains a checkout and their own copy of MySQL running locally (if you're unfamiliar with Django, it comes bundled with its own test server, much the way ASP apps can run inside of Visual Studio). Development and testing are done locally, then committed back to the repository. The actual working copy of the website is an SVN checkout (I know about SVN export and it takes too long). The closest we have to a 'build' is a batch file that runs an SVN update on the working copy, does the Django bits ('manage.py syncdb'), updates the search engine cache (Solr), then restarts Apache.
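For concreteness, the batch file amounts to something like the following sketch (written here in Python for readability; the paths are made up and the Solr step is site-specific, so it's left as a comment):

    # Rough sketch of our 'build' batch file; paths and the apache command are
    # illustrative, and the solr refresh is site-specific so it is only a comment.
    import subprocess
    import sys

    steps = [
        ["svn", "update", "/var/www/oursite"],            # update the working copy
        ["python", "manage.py", "syncdb", "--noinput"],   # the django bits
        # refresh the solr search cache here (command omitted, site-specific)
        ["apachectl", "restart"],                         # restart apache
    ]

    for step in steps:
        if subprocess.call(step) != 0:
            sys.exit("step failed: %s" % " ".join(step))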
I guess what I don't see is the parallel to web apps.
Are you doing a source controlled web app with 'nightly builds' -- if so, what does that look like?
You can easily run all of your Django unit tests through the Django testing framework as your nightly build.
That's what we do.
We also have some ordinary unit tests that don't leverage Django features, and we run those, also.
Even though Python (and Django) don't require the kind of nightly compile/link/unit test that compiled languages do, you still benefit from the daily discipline of "Don't Break The Build". And a daily cycle of unit testing everything you own is a good thing.
We're in the throes of looking at Python 2.6 (which works perfectly for us) and running our unit tests with the -3 option to see which deprecated features we're using. Having a full suite of unit tests assures us that a change for Python 3 compatibility won't break the build. And running them nightly means that we have to be sure we're refactoring correctly.
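As a minimal sketch of what such a nightly run can look like (assuming a stock Django project layout and that the script is started from the project root by cron or the CI job), it boils down to shelling out to the test runner twice, once normally and once under -3:

    # Minimal nightly test driver; assumes a stock Django layout and that the
    # interpreter is Python 2.6, where the -3 flag emits Py3 deprecation warnings.
    import subprocess
    import sys

    runs = [
        [sys.executable, "manage.py", "test"],        # full Django test suite
        [sys.executable, "-3", "manage.py", "test"],  # same suite, plus -3 warnings
    ]

    for cmd in runs:
        if subprocess.call(cmd) != 0:
            sys.exit(1)   # break the build loudly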
Continuous integration is useful if you have the right processes around it. TeamCity from JetBrains is a great starting point if you want to build familiarity:
http://www.jetbrains.com/teamcity/index.html
There's a great article that relates directly to Django here:
http://www.ajaxline.com/continuous-integration-in-django-project
Hope this gets you started.
Web applications built in dynamic languages may not require a "compilation" step, but there can still be a number of "build" steps involved in getting the app to run. Your build scripts might install or upgrade dependencies, perform database migrations, and then run the test suite to ensure that the code is "clean" w.r.t. the actual checked-in version in the repository. Or, you might deploy a copy of the code to a test server, then run a set of Selenium integration tests against the new version to ensure that core site functionality still works.
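As a small illustration of that last idea, a post-deploy Selenium check can be as simple as the sketch below (assuming the Selenium Python bindings and a browser driver are installed; the staging URL and expected title are placeholders):

    # Minimal post-deploy smoke check with Selenium; the URL and the expected
    # page title are illustrative placeholders.
    from selenium import webdriver

    driver = webdriver.Firefox()
    try:
        driver.get("http://staging.example.com/")
        assert "Welcome" in driver.title, "home page did not render as expected"
    finally:
        driver.quit()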
It may help to do some reading on the topic of Continuous Integration, which is a very useful practice for webapp dev teams. The more fast-paced and agile your development process, the more you need regular input from automated testing and quality metrics to make sure you fail fast and loud on any broken version of the code.
If it's really just you and one other developer working on it, nightly builds are probably not going to give you much.
I would say that the web app equivalent of nightly builds would be staging sites (which can be built nightly).
Where nightly builds to a staging area start paying real dividends is when you have clients, project managers, and QA people that need to be able to see an up to date, but relatively stable version of the app. Your developer sandboxes (if you're like me, at least) probably spend a lot of time in an unusable state as you're breaking things trying to get the next feature implemented. So the typical problem is that a QA person wants to verify that a bug is fixed, or a PM wants to check that some planned feature was implemented correctly, or a client wants to see that you've made progress on the issue that they care about. If they only have access to developer sandboxes, there's a good chance that when they get around to looking at it, either the sandbox version isn't running (since it means ./manage.py runserver is up in a terminal somewhere) or it's in a broken state because of something else. That really slows down the whole team and wastes a lot of time.
It sounds like you don't have a staging setup, since you just automatically update the production version. That could be fine if you're far more careful and disciplined than I am (and, I think, than most developers are) and never commit anything that isn't totally bulletproof. Personally, I'd rather make sure that my work has made it through at least some cursory QA by someone other than me before it hits production.
So, in conclusion, the setup where I work:
each developer runs their own sandbox locally (same as you do it)
there's a "common" staging sandbox on a dev server that gets updated nightly from a cronjob. PMs, clients, and QA go there. They are never given direct access to developer sandboxes.
There's an automated (though manually initiated) deployment to production. A developer or the PM can "push" to production when we feel things have been sufficiently QA'd and are stable and safe.
I'd say the only downside (besides a bit of extra overhead setting up the nightly staging builds) is that it makes for a day of turnaround on bug verification. ie, QA reports a bug in the software (based on looking at that day's nightly build), developer fixes bug and commits, then QA must wait until the next day's build to check that the bug is actually fixed. It's usually not that much of a problem since everyone has enough stuff going on that it doesn't affect the schedule. When a milestone is approaching though and we're in a feature-frozen, bugfix only mode, we'll do more frequent manual updates of the staging site.
I've had great success using Hudson for continuous integration. Details on using Hudson with Python by Redsolo.
A few months ago, several articles espousing continuous deployment caused quite a stir online. IMVU has details on how they deploy up to 5 times a day.
The whole idea behind frequent builds (nightly or more frequent like in continuous integration) is to get immediate feedback in order to reduce the elapsed time between the introduction of a problem and its detection. So, building frequently is useful only if you are able to generate some feedback through compilation, (ideally automated) testing, quality checks, etc. Without feedback, there is no real point.