How can I properly parallelize my test suites in TeamCity?

So I have a scenario whereby I have many different test suites. They are all triggered by a Create Test Environment step. However, these test suites cannot run concurrently on the same environment, as they would interfere with each other. To alleviate this, I added a shared resource in TeamCity and configured the build definitions to block on this resource, so that only one test suite runs at a time. This works.
However, if another code change is checked in while the test suites for Environment A are running, Environment B can be created by the Create Test Environment step, and all the test suites are queued again. Because they all block on the same shared resource, those tests then sit in the queue awaiting access to it. But there is no reason the tests for Environment B cannot run (one build at a time) in parallel with the tests for Environment A. How can I best tweak my TeamCity configuration to achieve this?

It seems that you are looking for matrix builds. This feature is not implemented in TeamCity. As a workaround, you can create a separate build configuration for each environment, each blocking on its own environment-specific shared resource. You can use TeamCity templates to simplify the setup. For more details see the related comment in the TeamCity issue tracker.
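If you keep your build configuration in TeamCity's Kotlin DSL, the per-environment layout could look roughly like the sketch below. This is only an illustration, not something from the question or the issue tracker: the IDs and resource names are invented, the "JetBrains.SharedResources" feature type and "locks-param" parameter are assumptions based on TeamCity's generic build-feature syntax, and each environment's shared resource still has to be defined on the project itself.

    import jetbrains.buildServer.configs.kotlin.v2019_2.*

    version = "2019.2"

    project {
        // One test-suite configuration per environment is shown; in practice
        // you would generate one configuration per suite per environment
        // from a shared template. Each takes a lock only on its own
        // environment's resource, so Env A and Env B suites queue
        // independently while suites within one environment stay serialized.
        buildType(testSuiteFor("EnvA"))
        buildType(testSuiteFor("EnvB"))
    }

    fun testSuiteFor(env: String) = BuildType {
        id("TestSuite_$env")
        name = "Test Suite ($env)"
        features {
            feature {
                // Assumed feature type and parameter for a shared-resource
                // lock; the "<env>_lock" resource must exist on the project.
                type = "JetBrains.SharedResources"
                param("locks-param", "${env}_lock writeLock")
            }
        }
    }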

Related

Using Ant as a continuous testing tool

So after much hunting I failed to find a continuous testing tool for IntelliJ 14.
I stumbled across a post that describes using Eclipse and Ant to simulate this: on save, Ant runs the tests for whatever was modified.
I've tried to replicate this but, alas, I've never used Ant before and am finding it extremely difficult. I've set up and configured a generic Ant build file in IntelliJ but simply cannot figure out how to achieve my task.
Any help or pointers in the right direction are very much appreciated. I've searched, but have only found information that needs to be deciphered first.
Eclipse has the builder feature: you create an Ant builder for your project; see also https://stackoverflow.com/a/15075732/130683.
IntelliJ has a trigger feature that might serve the purpose.
Also Infinitest, which provides a continuous-testing plugin for Eclipse and IntelliJ, might be helpful.
Ant is a build tool. IntelliJ can build your project for you, but then you need IntelliJ to do it, which means you can't build and distribute your application without IntelliJ.
Ant uses a dependency matrix for building. This is sometimes difficult for developers to understand, but it basically means that you define the steps, declare how the steps depend on each other, and let the build tool figure out exactly how to do its job. Ant is to Java what Make is to C and C++.
Ant uses targets, which are the steps you want performed. For example, you might have a target called package that builds your jar or war. That target might depend upon another target called compile that compiles the code. That target in turn might depend upon a code-generation phase (for example, if you had WSDL files).
Each target is a set of tasks. For example, the compile target is likely to have the <javac> task in it. It might also need the <mkdir> task to create the work directories where your class files are stored.
There are plenty of books on Ant, and there's a tutorial on the Ant Website. You didn't explain the issues you were having, so it's hard to be more specific than this.
Ant can also run your unit tests. There's a <junit> task which runs the tests; you can run whole batches of tests via the nested <batchtest> element (which selects test classes with filesets), or, if you have a driver class, specify it via the nested <test> element.
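For instance, a test class that <batchtest> picks up by file-name convention looks like the minimal sketch below (the class and method names are invented, it assumes JUnit 4 on the classpath, and it's written in Kotlin; the plain-Java version has exactly the same shape):

    import org.junit.Assert.assertEquals
    import org.junit.Test

    // Named *Test so a <batchtest> fileset pattern like **/*Test.class
    // will discover and run it.
    class StringTrimTest {
        @Test
        fun trimRemovesSurroundingWhitespace() {
            assertEquals("abc", "  abc  ".trim())
        }
    }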
Once you get an Ant script that can build and run your tests outside of IntelliJ, you can now get a Continuous Integration tool like Jenkins. A continuous integration tool watches your repository for changes, and if a change occurs, will then build your application. It's a great way to catch errors early on.
What does this have to do with Continuous Testing? Well, if your Ant script can run your unit tests, the Continuous Integration engine can not only build your app but also run the unit tests with each and every change that occurs.
Jenkins is nice because it's very simple to use. You download a jenkins.war and you can launch the Jenkins webpage via the java -jar jenkins.war command. This brings up a web server on port 8080 on your machine. Obviously, Jenkins can be configured to run on different ports and under Tomcat if you so desire. It can integrate with Windows Active Directory, LDAP, and many other user verification systems.
Jenkins will show you charts and graphs of your tests, let you know which tests failed or passed, and will notify you of any problems via email, tweets, IM, Jabber, and even Facebook posts. People have even setup a traffic light in their offices that turns red when builds or tests fail.
Take it one step at a time. Get a good book on Ant. Read the tutorial on the Ant website. Then try to get a working Ant script just to build your app. If you are having specific issues, you can ask for help.
Once you have the build going, extend the script to run your unit tests. Once that is done, download Jenkins and try to get that up and running.

How to Deploy Data in an SSDT Unit Test but Only for the Unit Test

I have successfully begun to write SSDT unit tests for my recent stored procedure changes. One thing I've noticed is that I wind up creating similar test data for many of the tests. This suggests to me that I should create a set of test data during deployment, as a post-deploy step. This data would then be available to all subsequent tests, and there would be less need for lengthy creation of pre-test scripts. Data which is unique to a given unit test would remain in pre-test scripts.
The problem is that the post-deploy script would run not only during deployment for unit tests, but also during deployment to a real environment. Is there a way to make the post-deploy step (or parts of it) run only during the deployment for an SSDT unit test?
I have seen that the test settings in the app.config include the database project configuration to deploy. But I don't see how to cause different configurations to use different SQLCMD variables.
I also see that we can set different SQLCMD variables in the publish profiles, but I don't see how the unit test settings in app.config can reference different publish profiles.
You could use an IF statement checking @@SERVERNAME and only run your unit-testing code on the unit-test server(s), with the same kind of check for the other environments.
Alternatively, you could make use of the build number in your TFS build definition. If the build number contains, for example, the substring 'test', you execute the test code; otherwise you don't. Then make sure to set an appropriate build number in all your builds.

Rails app, Continuous Integration/Deployment Environments

When developing, my team obviously uses development as our environment.
When we run automated tests, we use testing.
We also have staging and production environments, respectively used for our testers to check out features and the final "live" product.
We're trying to set up an internal CI server to run our automated tests against and eventually to assist with automated deployments.
Since the CI server is really running automated tests, some think it should run in the testing environment. However, for the CI server to actually be useful, my thought is that it needs to run in production mode, with as close a mirror of the actual production environment as possible (without touching the production DB, obviously).
Is there an accepted environment that a CI server should be executed under? The production environment (with a different DB) seems the only logical answer to me, but I may be missing something...
Running any tests on the PROD environment, which as you said "seems the only logical answer", is not quite right. There is a risk that your tests seriously damage the actual environment/application, to the point where you'll be facing a recovery. After all, the dark side of testing is to show that your software has more than minor bugs and is not working as expected.
I can think of at least these 'why not test production' considerations:
when the product is launched, the customer relies on it, expecting that your software works (being already tested). Your live environment should do its job and not be loaded with tests. If the product misbehaves (or does not perform), the technical team has to be sent in to cover the damage, fix the gaps, and make it run hassle-free. This not only adds to the product's cost but delays the project deadlines in a major way, which has a knock-on effect on the vendor's profits and the next few projects.
when the production or development team completes product development at their end, they have to produce this test environment for the testing team before loading their newly developed product onto that environment for testing.
To me, no matter that you "also have staging and production environments", it is essential to use the Test one accordingly. Furthermore, the testing environment should be configured as closely as possible to Production. Also, one person could be trying to test while another person breaks the very thing he has been testing; without the two being separate, there is no way to do proper testing.
Just to give a full answer: your STAGE environment can have different roles depending on the company.
One is that it can be the QA/STAGE environment that has an exact copy of production, used both for QA and for system testing (testing the system when a lot of updates/changes or an upgrade is about to go into production).
UPDATE:
That was my point too: the QA environment should be a mirror of PROD. A possible solution to your issue with caching/pre-loading files onto staging/production is the creation of pre-/post-step .bat files (for example).
In our current test project we use this approach. In pre-steps we set up the files needed for test execution (like removing files from previous runs and downloading the latest copies/artifacts); in post-steps we set up the reporting files needed. The advantage is that your files are collected and synced before every execution.
About "not on the same physical hardware": in my case we maintain a dedicated remote test server. The advantages are clear; the only thing to consider is that it requires maintenance (administration).

Best practices (unit) testing Windows Azure

Within a short time I'm going to start a project based on Windows Azure, and I was wondering what the experiences are with testing Windows Azure projects (in continuous integration, with a TFS build server), eventually using TDD.
Some things I was wondering:
Do you use mocking (in your own written wrapper class)?
Do you use the storage emulator?
Do you deploy the services to Azure and run the tests from the build server to the cloud? (what about costs)?
Thanks in advance!
The same good practices for writing unit tests for applications outside of Windows Azure apply. If you have an external dependency to what you are actually testing, that dependency should be mocked and injected for your granular unit test.
For example, when I'm using Windows Azure Storage Queues I will have an interface that I use to interact with the queue itself, so in my code consuming the queue service I can mock the subsystem using the interface and use dependency injection to inject the mock. This removes the necessity to actually deal with the emulator during unit tests. For the most part the actual concrete implementation of the code working with the queue is not much more than a very thin wrapper.
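As a sketch of that wrap-and-inject shape (the names MessageQueue, QueueProcessor and InMemoryQueue are invented for illustration, and since the original context is the .NET storage client, treat this as the pattern rather than the actual API):

    // The thin wrapper interface around the queue service.
    interface MessageQueue {
        fun enqueue(message: String)
        fun dequeue(): String?   // null when the queue is empty
    }

    // Production code depends only on the interface, injected through the
    // constructor, so unit tests never touch the real queue or the emulator.
    class QueueProcessor(private val queue: MessageQueue) {
        fun processNext(): String? = queue.dequeue()?.uppercase()
    }

    // Hand-rolled fake that stands in for the real queue in unit tests.
    class InMemoryQueue : MessageQueue {
        private val items = ArrayDeque<String>()
        override fun enqueue(message: String) { items.addLast(message) }
        override fun dequeue(): String? = items.removeFirstOrNull()
    }

    fun main() {
        val queue = InMemoryQueue()
        queue.enqueue("hello")
        println(QueueProcessor(queue).processNext())  // prints HELLO
    }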
I personally don't shoot for 100% test coverage, so I may not have direct unit tests that utilize the concrete implementation of the wrappers. In many cases I try to have integration tests that will exercise these wrappers and exercise multiple aspects of the system working together. In some cases I can run the integration tests in the emulator (for Storage operations for example), but in some cases they simply have to be run with access to the Windows Azure environment (in the case of usage of ACS or Service Bus).
Ideally you'd like to have a set of scripts that can be run to spin up a minimum set of test servers in Azure, deploy your solution, and exercise the integration tests that can't be done on premises. Then get the results of that and have the script shut everything down (or optionally leave it running if you need that). Run the integration-test suite that utilizes these scripts often enough to detect issues, but you certainly don't need to run it every time you check something in unless you are happy with running the test environment all the time. If you're okay with the cost of a semi-permanent test environment running in Azure, then just make sure the scripts do an update deployment rather than a delete-and-redeploy, to cut down on cost a bit (savings would be relative to how often the deploy occurs).
I believe this question is a very subjective one as you're likely to get several different opinions.

Build (CI) server for embedded software

I would like to ask about your experience with build servers for embedded systems: what are you using (if any), and what are the good and bad sides?
We are developing mainly for microcontrollers without an operating system.
At the moment I'm trying to use Jenkins, and my build is running. But I have some problems with the project structure: when I want all plugins working, I need a flat job structure, but we have a few projects developed in parallel, and then the job view starts to get messy.
I've tried folders, but then some plugins stopped working.
I would like to build a pipeline that runs sequentially but has parallel jobs inside. E.g. a commit stage has: compile, lint check, style check, unit tests. All of them can run in parallel, and when all are successful the next stage is executed.
What I need from a build server at the moment:
build pipeline support
user authorization based on LDAP
parallel job execution
hierarchical projects (projects/configurations groups)
reports from xUnit, Lint, Compiler warnings, Robot framework.
slave/agent support, tags for slaves
privileges based on ldap groups
privileges per group/project
I'm open to any suggestions, open source and commercial.
I was looking at Bamboo; in the videos it looks very nice, but I haven't tried it yet.
We have two development teams that are developing different projects. It would be nice to have projects grouped by team and privileges per group; members of one group shouldn't modify the builds of the other. But this is more "nice to have" than "must have".
TeamCity
I tried to use TeamCity. Building a build pipeline is easier than in Jenkins: just click Add Step.
One thing that I found difficult is running steps in parallel within one configuration. For example, after a commit I would like to run Lint, unit tests, and compile in parallel to save some time. I found a solution, but it makes the pipeline harder to view and maintain.
TeamCity supports multiple configurations per project, which solves the job-grouping problem. I didn't find an option to group projects, though.
TeamCity is a Java-based CI server from JetBrains with a free Professional edition. We've been using it very successfully (for very different kinds of projects) and I would unreservedly recommend it to you. To each of your requirements:
Build pipelines are configured as a series of steps within a build configuration. A project can have an arbitrary number of configurations, which in turn can have an arbitrary number of steps.
LDAP integration is fully supported.
Build pipelines can be executed in parallel. TeamCity delegates work to Build Agents, which are typically distinct servers that have all the necessary tools (frameworks, etc.) to perform the steps of a build configuration. The free version of TeamCity comes with licenses for three agents, so you could have up to three builds running in parallel. Additional agents can be licensed for a nominal fee.
By 'hierarchical projects' I understand you to mean that the completion of one build pipeline will automatically trigger the start of a subsequent pipeline. This is supported, and build/version numbers can be passed between the stages for consistency.
XUnit has first-class support. Lint/compiler reports can be saved as 'artifacts' of the build for easy review later. Essentially, a lot of frameworks have built-in support in TeamCity, and for everything else you can execute arbitrary shell commands, the output of which can be saved as artifacts or used in subsequent build steps.
Slave/agent support is central to the TeamCity model, as noted above.
All of this is highly configurable and customizable. We've been able to do a lot of diverse, complex things with TeamCity, and it has been totally solid and stable for us. And it looks good, too -- the server dashboard arranges information in an easily-understood way.
Disclaimer: I work for Atlassian so I'm a bit biased.
Configuring your build pipeline in Bamboo is pretty easy to do. Bamboo operates based on a Plan → Stage → Job structure, listed from higher to lower order. Check out the Bamboo Plan Structure.
Every Project in Bamboo holds a collection of Plans. Plans are comprised of one or more Stages. Stages run sequentially and are comprised of one or more Jobs. Jobs run in parallel and are comprised of one or more Tasks (tasks run sequentially, but can be placed in separate Jobs so that they run in parallel and speed up build time). Agents in Bamboo are machines or services that perform your build steps. An entire Job will execute on a single Agent. You can read more about Agents here. As for slave tags, the ability to make certain agents exclusively tied to certain builds or projects is on the short-list for new features.
To answer your other points:
user authorization based on LDAP/privileges based on ldap groups/project: You can connect to an external LDAP server to manage users and permissions. Bamboo has a groups feature or if your team is using JIRA you can take advantage of JIRA groups to set up global permissions, plan permissions, and also indicate which users will receive notifications on a plan’s build results. Global permissions control who has access to build plans and the Bamboo server whereas Plan permissions control who can perform specific operations on a Plan and its Jobs.
hierarchical projects (projects/configurations groups): Bamboo does support parent & child plan structure. There are several ways you can set up triggering for builds. One of them is to base triggers on other builds, that is, Plan builds are triggered by preceding successful builds of other plans or if other specified plans are building successfully. Example: If Plan A builds successfully it will automatically trigger builds of Plans B & C.
reports from xUnit, Lint, Compiler warnings, Robot framework: Bamboo can run any build process that can be started from a command line. Support includes Maven/Maven2, Ant, make, MSBuild, NAnt, Grails, devenv.exe, and any xUnit-compliant framework (JUnit, Selenium, JWebUnit, NUnit, PHPUnit, etc.).