I am trying to improve our general automation process. We use VS2012 and TFS2012.
Here is what I want to happen upon checkin to our CI branch:
BUILD
Build the selected projects / solutions as configured in the build definition settings.
Generate a deployment package that can be used to deploy the websites (without having to rebuild the entire project again)
Generate a NuGet package that can later be published (without having to rebuild the entire project again; I need the DLLs to match the symbols created from indexing so we can debug them)
TEST - IF AND ONLY IF BUILD WAS SUCCESSFUL
Run all configured unit tests.
DEPLOY - IF AND ONLY IF ALL UNIT TESTS PASS
This is to prevent breaking changes entering our development environment.
Take the deployment package from (1.2) and publish it to its intended environment (hopefully configured using Publishing Profiles and transforms)
PUBLISH - IF AND ONLY IF ALL UNIT TESTS PASS
Take the NuGet package from (1.3) and publish it to our private NuGet gallery
I don't need a full tutorial for the entire process (although that would be awesome), but more guidance on how to go about integrating it.
For instance:
Should I use MSBuild on a wrapper project?
How do I deal with creating the packages upon build on the TFS build server?
How can I enforce the "IF AND ONLY IF ALL UNIT TESTS PASS" constraints?
What is the best / easiest way to perform the deployment / publishing as part of the build?
This is the process we want to use, and any help in realising it is very much appreciated.
And I'm sure many other people are interested in how to set about integrating this style of process.
Also, if it's relevant: most solutions have a mix of shared DLL projects, websites / APIs, and unit tests. One of the reasons I want this process is to be able to split them up and modularise our large DLLs into smaller isolated units, which would be too unmanageable at the moment without this auto-publish mechanism.
Thanks,
Gary.
BUILD Build the selected projects / solutions as configured in the build definition settings. Generate a deployment package that can be used to deploy the websites (without having to rebuild the entire project again)
This works out of the box: add a publish profile to each of your projects and call them 'Release'.
Add the following to your MSBuild arguments:
/p:DeployOnBuild=true;PublishProfile=Release
You don't have to use 'Release', as long as your publish profile names match what you put in the MSBuild arguments.
This will generate the deployment files (the MSDeploy package) as part of your build.
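For reference, here is a minimal sketch of what such a publish profile (e.g. Properties\PublishProfiles\Release.pubxml) might look like when targeting an MSDeploy package; the package location, site name and file names are placeholders, not from this thread:

<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <!-- Produce an MSDeploy package instead of publishing directly -->
    <WebPublishMethod>Package</WebPublishMethod>
    <LastUsedBuildConfiguration>Release</LastUsedBuildConfiguration>
    <DesktopBuildPackageLocation>..\Artifacts\MySite.zip</DesktopBuildPackageLocation>
    <PackageAsSingleFile>true</PackageAsSingleFile>
    <!-- IIS application path the package will target at deploy time -->
    <DeployIisAppPath>Default Web Site/MySite</DeployIisAppPath>
  </PropertyGroup>
</Project>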
Generate a NuGet package that can later be published (without having to rebuild the entire project again; I need the DLLs to match the symbols created from indexing so we can debug them)
See NuGetter on CodePlex: http://nugetter.codeplex.com/
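NuGetter drives nuget.exe from the TFS build, typically packing from a .nuspec checked in alongside the project. A minimal sketch of such a .nuspec, packing the already-built binaries so the DLLs match the indexed symbols (the id, version, and paths here are placeholders, not from this thread):

<?xml version="1.0"?>
<package>
  <metadata>
    <id>My.Shared.Library</id>
    <!-- NuGetter can override the version as part of the build -->
    <version>1.0.0</version>
    <authors>YourTeam</authors>
    <description>Shared library packaged from the CI build output.</description>
  </metadata>
  <files>
    <!-- Pack the binaries the build just produced; no rebuild needed -->
    <file src="bin\Release\My.Shared.Library.dll" target="lib\net45" />
    <file src="bin\Release\My.Shared.Library.pdb" target="lib\net45" />
  </files>
</package>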
TEST - IF AND ONLY IF BUILD WAS SUCCESSFUL Run all configured unit tests.
This should be out of the box, but you can change the build template to fail the build if compilation is unsuccessful, if that suits your needs better.
DEPLOY - IF AND ONLY IF ALL UNIT TESTS PASS This is to prevent breaking changes entering our development environment. Take the deployment package from (1.2) and publish it to its intended environment (hopefully configured using Publishing Profiles and transforms)
PUBLISH - IF AND ONLY IF ALL UNIT TESTS PASS Take the NuGet package from (1.3) and publish it to our private NuGet gallery
See NuGetter on CodePlex, as listed above.
Related
I'm trying to figure out how to chain multiple 'promotions' (triggered by a user clicking) whilst ensuring that every build in the chain is not queued. My current setup is as follows. NOTE: as my application is a white label, the configuration described below is repeated for every site.
Build & Test - Creates zipped artifact
Deploy to Testing - Has artifact and snapshot dependency
Deploy to Staging - Has artifact and snapshot dependency
Deploy to Production - Has artifact dependency
When promoting to production I want to do this across all websites (without having to manually click promote on each build).
I am currently trying the following strategy: set the 'Deploy to Production' build to have an artifact dependency without a snapshot dependency, so it doesn't queue down the chain. I have set the artifact to depend on the 'Build & Test' configuration to gain access to the zipped project, and I have set it to build with a specific build number referencing a parameter in the production build.
After doing some googling I found out that I am able to get staging's build number using the REST API as follows:
http://teamcity_url/httpAuth/app/rest/builds/buildType:build_configuration_id/resulting-properties/build.number
And this works great; however, I don't understand how I can get this value into the parameter.
Also, I don't know if my approach is correct. Is there a better way?
Set up the artifact dependencies chronologically (Build -> Test -> Staging -> Production) and point all your snapshot dependencies at Build & Test. Depending on your exact needs, you might have a snapshot dependency on both Build and the configuration your artifact dependency is on.
Also make sure you enable "Do not run new build if there is a suitable one". This should keep it from queuing down the chain unintentionally.
Using the build chain tab will be important, because the main project page only shows the last build run. Clicking run from there will queue the whole chain, because you are asking for a new build, even though to you it might feel like you're asking for the next step to be run. The build chain tab helps keep things clear.
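On the specific question of getting the dependency's build number into a parameter: rather than pulling it over the REST API, TeamCity can resolve dependency properties directly in a parameter value. A sketch, assuming the 'Build & Test' configuration has the ID Build_Test (the ID here is an assumption):

%dep.Build_Test.build.number%

Putting that reference in a configuration parameter of the 'Deploy to Production' build makes TeamCity substitute the depended-on build's number at run time, which avoids the REST round trip entirely.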
I am attempting to set up Java code coverage for a fairly complex app that
combines multiple large modules, only one of which I need to check coverage on
uses a combination of Ant and Maven for builds
cannot be run except as an installed application on a server, with configuration
the automated tests to be analyzed for coverage are not part of the application build and make use of API calls to the application server from a remote client
The examples given in the JaCoCo documentation and in the online sources I have found assume the app under test is not previously installed, and that the tests are unit/integration tests run as part of the build. The documentation does not cover the details of how the JaCoCo instrumentation is done, or when a call to a particular line of code is recorded. If I use Ant or Maven to instrument a particular module, use that module to build the full app, install it on a server, and configure it, will my remote tests then generate the .exec file?
Any advice on how to achieve the end goal (knowing how much of our code is covered by the tests) is greatly appreciated, including better search terms than "jacoco for installed app" which as you can imagine is ... not very useful. My google-fu is humbled.
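For what it's worth, JaCoCo's documented on-the-fly instrumentation happens inside the JVM via a Java agent, so an installed server application can be covered by adding the agent to the server JVM's startup options rather than instrumenting anything at build time. A sketch (all paths here are placeholders):

-javaagent:/opt/jacoco/jacocoagent.jar=destfile=/tmp/jacoco.exec,output=file,append=true

With this, the .exec file is written when the server JVM shuts down, regardless of whether the tests driving it run remotely; output=tcpserver can be used instead to dump coverage from a running server.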
So after much hunting I failed to find a continuous testing tool for IntelliJ 14.
I stumbled across a post that describes using Eclipse and Ant to simulate this: on save, Ant runs the tests for any tests that were modified.
I've tried to replicate this but, alas, I've never used Ant before and am finding it extremely difficult. I've set up and configured a generic Ant build file in IntelliJ but simply cannot figure out how to achieve my task.
Any help or pointers in the right direction are very much appreciated. I've searched, but have only found information that needs to be deciphered first.
Eclipse has the builder feature: you create an AntBuilder for your project; see also https://stackoverflow.com/a/15075732/130683.
IntelliJ has a trigger feature that might serve the purpose.
Also, Infinitest, which provides a continuous testing plugin for Eclipse and IntelliJ, might be helpful.
Ant is a build tool. IntelliJ can do builds for you as well, but then you need IntelliJ to do them, which means you can't build and distribute your application without IntelliJ.
Ant uses a dependency matrix for building. This is sometimes difficult for developers to understand, but it basically means that you define the steps and how the steps depend upon each other, then let the build tool figure out exactly how to do its job. Ant is to Java what Make is to C and C++ applications.
Ant uses targets, which are the steps you specify to do. For example, you might have a target called package that will build your jar or war. That target might depend upon another target called compile to compile the code. That target might in turn depend upon a code generation phase (for example, if you had WSDL files).
Each target is a set of tasks. For example, the compile target is likely to have the <javac> task in it. It might also need the <mkdir> task to create the work directories where your class files are stored.
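As a sketch of that structure (the project, directory, and file names here are illustrative, not from the post):

<project name="myapp" default="package">
    <!-- Compile Java sources into a work directory -->
    <target name="compile">
        <mkdir dir="build/classes"/>
        <javac srcdir="src" destdir="build/classes" includeantruntime="false"/>
    </target>
    <!-- package depends on compile, so Ant works out the ordering -->
    <target name="package" depends="compile">
        <jar destfile="build/myapp.jar" basedir="build/classes"/>
    </target>
</project>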
There are plenty of books on Ant, and there's a tutorial on the Ant Website. You didn't explain the issues you were having, so it's hard to be more specific than this.
Ant can also run your unit tests. There's a <junit> task which can run the tests; you specify whether you want to run a whole batch of tests via the <batchtest> nested element, or, if you have a program driver, a single class via the <test> element.
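A sketch of such a test target, continuing the illustrative build file above (the test.classpath reference and directory names are assumptions):

<target name="test" depends="compile">
    <mkdir dir="build/test-reports"/>
    <junit printsummary="yes" haltonfailure="yes">
        <classpath refid="test.classpath"/>
        <formatter type="xml"/>
        <!-- Run every compiled class whose name ends in Test -->
        <batchtest todir="build/test-reports">
            <fileset dir="build/test-classes" includes="**/*Test.class"/>
        </batchtest>
    </junit>
</target>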
Once you get an Ant script that can build and run your tests outside of IntelliJ, you can now get a Continuous Integration tool like Jenkins. A continuous integration tool watches your repository for changes, and if a change occurs, will then build your application. It's a great way to catch errors early on.
What does this have to do with Continuous Testing? Well, if you have your Ant script able to run unit tests, the Continuous Integration engine not only can build your app, but then run the unit tests with each and every change that occurs.
Jenkins is nice because it's very simple to use. You download a jenkins.war and you can launch the Jenkins webpage via the java -jar jenkins.war command. This brings up a web server on port 8080 on your machine. Obviously, Jenkins can be configured to run on different ports and under Tomcat if you so desire. It can integrate with Windows Active Directory, LDAP, and many other user verification systems.
Jenkins will show you charts and graphs of your tests, let you know which tests failed or passed, and will notify you of any problems via email, tweets, IM, Jabber, and even Facebook posts. People have even setup a traffic light in their offices that turns red when builds or tests fail.
Take it one step at a time. Get a good book on Ant. Read the tutorial on the Ant website. Then try to get a working Ant script to just to build your app. If you are having specific issues, you can ask for help.
Once you have the build going, extend the script to run your unit tests. Once that is done, download Jenkins and try to get that up and running.
We have a peculiar situation here that is causing our automated tests to fail on a newly created lab environment, using TFS 2012.
We've always had a bunch of 'unit' tests that test our DAL code, which in turn uses the Enterprise Library Data Access Application Block to perform operations on the database. This was set up quite a few years ago, to enable our clients to choose either SQL Server or Oracle databases alongside our product, taking advantage of the DatabaseFactory class and all the supporting generic interfaces and classes in entlib.data. I say 'unit' like this because these are actually not pure unit tests but integration ones, seeing as they require a real database to work.
To test the same SQL code against both databases, we maintain two separate .config files inside a 'Resources' folder in our TFS project branch, pointing to our test databases:
Resources\SqlServer\ConnectionStrings.config (SqlServer specific connection strings)
Resources\Oracle\ConnectionStrings.config (Oracle specific connection strings)
In the root Resources folder, there are two accompanying .testsettings files, responsible for deploying files specific to each database:
Resources\SqlServer.testsettings (which deploys the SqlServer\ConnectionStrings.config file)
Resources\Oracle.testsettings (which deploys the Oracle\ConnectionStrings.config file)
Since the whole structure is in source control, the .testsettings files are able to find the .config files by using relative paths, allowing us to test everything without having to set up parameters manually.
On dev machines, we always select the SqlServer.testsettings file when running the tests, so that developers don't need to have the whole Oracle environment installed to validate their changes before checking in the code. The Oracle side of the validation always occurred in our build process, where we actually test every method twice: first using the same SqlServer.testsettings used by the developers, and then using Oracle.testsettings.
This way, we can set up our test assemblies' app.config files to redirect the connectionStrings node to an external file, like this:
<configuration>
<connectionStrings configSource="ConnectionStrings.config"/>
...
When the tests are run, MSTest copies the appropriate ConnectionStrings.config file to the test's working folder, based on which .testsettings was used to initiate the run.
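For illustration, the external file contains just the node that configSource points at; something like this, with placeholder names and a placeholder connection string:

<connectionStrings>
    <add name="MainDatabase"
         connectionString="Data Source=(local);Initial Catalog=TestDb;Integrated Security=True"
         providerName="System.Data.SqlClient"/>
</connectionStrings>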
This was working fine until today, when I discovered that tests started through Microsoft Test Manager ignore the Visual Studio .testsettings files. Now I'm trying to run these same tests in our lab environment but the ConnectionStrings.config files are not deployed (understandably) and the tests fail.
How can we achieve this without using .testsettings files? After huge headaches trying to set up Oracle correctly on our new x64 build server, we disabled the Oracle tests in the build definition. Now that we have started setting up our lab environment, we thought about configuring one of its machines with our whole system using Oracle, enabling us to again run these 'unit tests' with Oracle-specific connection strings to validate our queries. At the same time, we want to keep testing everything locally and on the build server using SQL Server as well.
I think using [DeploymentItem] in this case is impossible, since it is meant for static files and not selectable, dynamic ones like our current setup.
Is there any equivalent to the .testsettings deployment process that we could use with TestCases inside MTM/Lab Env? On the Properties tab for our TestPlan, I can see the Automated Runs -> Test Settings option, but that only seems to allow deployment by specifying absolute paths (which will actually be resolved on the target machines). Is there a way to specify a relative path there, pointing to our ConnectionStrings.config files checked in on TFS? Maybe yet another alternative exists that I'm missing, perhaps using multiple build configurations?
Create separate build configurations for each of the server types by going into Configuration Manager and clicking New under 'Active solution configurations'. Then edit the project file and add something like this:
<PropertyGroup Condition="'$(Configuration)' == 'Oracle'">
    <AppConfig>App.Oracle.Config</AppConfig>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)' == 'SQL'">
    <AppConfig>App.SQL.Config</AppConfig>
</PropertyGroup>
Then ensure you have the correct connection strings in each of the config files. You can then configure TFS to build using those build configurations.
More info on using PropertyGroup and Condition, MSBuild configurations, and MSBuild project properties.
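To produce both variants on the build server, you could then queue the build once per configuration; as a sketch, the MSBuild argument for the Oracle run would be along the lines of:

/p:Configuration=Oracle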
I have started using the preview of Microsoft Team Foundation Service (TFS in the cloud, henceforth TFService) for a small project, and I'm currently setting up builds using the online build service included with TFService.
What I want to do is to add an installer of some kind. I've previously worked with InstallShield Limited Edition, WIX and Inno Setup and would like to keep using one of those if possible.
I've previously integrated Inno Setup as part of a build process (TFS 2010). This involved installing Inno Setup on the build computer and adding a custom build task for running an Inno Setup script. The last part should be possible with TFService as well, because it's possible to create custom build process templates.
However, I realize that installing anything such as Inno Setup or InstallShield will not work with TFService, since it's not possible to install any 3rd party software on the build computer (it's just a cloud service running on some unknown virtual computer which I cannot access).
So my question is: is there a way to automatically create an installer as part of a build process running on TFService? For example, is the build service capable of building InstallShield projects out of the box (there's a license included with Visual Studio, after all)? Or are there other ways to do this?
I have some experience with this, having tried to get WiX and InstallShield to work with the Microsoft TFS Preview cloud service using its managed build agents. On these agents, you don't have administrator rights and you can't install software.
This currently rules out InstallShield which must be installed.
It is however possible to check the WiX binaries into source control and pull them down as part of your build.
WiX uses .wixproj files (MSBuild) to define its compile activities. These reference a targets file and other properties (resolved from registry values) that won't exist when you deploy this way. A small bit of hacking will get all of these properties to resolve to workable values.
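As a sketch of that hacking, the idea is to point the WiX MSBuild properties at the checked-in binaries instead of the registry-derived install location (the Tools\WiX folder and the use of $(SolutionDir) are assumptions):

<PropertyGroup>
    <!-- Use the WiX binaries pulled from source control, not an installed copy -->
    <WixToolPath>$(SolutionDir)Tools\WiX\</WixToolPath>
    <WixTargetsPath>$(WixToolPath)Wix.targets</WixTargetsPath>
    <WixTasksPath>$(WixToolPath)WixTasks.dll</WixTasksPath>
</PropertyGroup>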
The one problem you may still have, though (and I'm thinking of the TFS managed build environment), is that you may have to configure your projects to skip the MSI ICE validation suites. On the build machines I worked with, the Windows Installer service was outright disabled, and this prevented those validation tests from running.
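If you hit that, skipping validation in the .wixproj is a one-line property (again a sketch, using the property the WiX targets expose for this):

<PropertyGroup>
    <!-- Skip ICE validation; Windows Installer may be disabled on the build agent -->
    <SuppressValidation>true</SuppressValidation>
</PropertyGroup>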