How does TeamCity know when an xUnit.net test is run? - unit-testing

I have always wondered how TeamCity recognizes that it is running xUnit.net tests and how it knows to put a separate "Test" tab in the build overview after a build step runs. Is the xUnit console runner somehow responsible for that?

I finally found out what is actually going on. TeamCity has its own service message API. I dug this code snippet out of the xUnit.net source code, and it makes everything clear:
https://github.com/xunit/xunit/blob/v1/src/xunit.console/RunnerCallbacks/TeamCityRunnerCallback.cs
public override void AssemblyStart(TestAssembly testAssembly)
{
    Console.WriteLine(
        "##teamcity[testSuiteStarted name='{0}']",
        Escape(Path.GetFileName(testAssembly.AssemblyFilename))
    );
}
...code omitted for clarity
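So the runner simply writes specially formatted lines, so-called service messages, to standard output, and TeamCity picks them up from the build log of any build step. A run of a suite with one passing test would emit roughly the following (a sketch; the assembly and test names here are made up, but testSuiteStarted, testStarted, testFinished and testSuiteFinished are TeamCity's documented message names):

##teamcity[testSuiteStarted name='MyTests.dll']
##teamcity[testStarted name='MyNamespace.MyTestClass.MyTest']
##teamcity[testFinished name='MyNamespace.MyTestClass.MyTest' duration='42']
##teamcity[testSuiteFinished name='MyTests.dll']

As soon as TeamCity sees such messages in a build step's output, it populates the Tests tab; the console runner's only job is to print them.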

Related

How to unit test gradle task?

I want to test the logic of my build.gradle script.
Excerpt of the script would be:
(...other tasks and methods...)
def readCustomerFile(File file) {
    def schema = <prepare schema>
    def report = schema.validate(JsonLoader.fromFile(file))
    if (!report.success) {
        throw new GradleException("File is not valid! " + report.toString())
    }
    return new groovy.json.JsonSlurper().parse(file)
}

task readFiles {
    mustRunAfter 'prepareCustomerProject'
    doLast {
        if (System.env.CUSTOMER_FILE_OVERRIDE) {
            project.ext.customerFileData = readCustomerFile(System.env.CUSTOMER_FILE_OVERRIDE)
        }
        else if (customerFile.exists()) {
            project.ext.customerFileData = readCustomerFile(customerFile)
        }
        else {
            throw new GradleException("Customer File is not provided! It is expected to be in CUSTOMER_FILE_OVERRIDE variable or in ${customerFile}")
        }
    }
}
(...other tasks and methods...)
I would like to test both the method and the task itself.
The 'prepareProject' task is quite lengthy to execute, but in the 'real' setup it does the magic necessary to set properties needed not only by the task above.
For testing, I only want to e.g. run the readFiles task and validate the results, making sure that either the property on the project was correctly set or an exception was thrown.
I have looked into Gradle TestKit, but it is not what I need, as I was unable to find anything that would allow me to e.g. inspect the project.
I have seen the Guide for Testing Gradle Scripts, but that post is quite old and does not address my need/problem. I have also had a look at the Gradle docs on Testing Build Logic with TestKit, but GradleRunner does not seem to offer any real inspection or project-preparation abilities.
Plus, it would make us use JUnit, effectively adding a whole class structure only for testing purposes. Not clean and hard to maintain.
Googling gradle + test + task and other variations finds tons of ways of running xUnit tests, but that's not what I need here.
Summarizing, what I need is:
test gradle tasks and methods from build.gradle in separation (test kit will run task with all its dependencies, I don't want this)
prepare project before test run (test kit does not seem to allow this)
verify task / method output
Has anyone successfully done this?
Or am I approaching this in a wrong way?
I'm fairly new to gradle, searching for good options to test my build scripts.

Disable Unit Test MSTest

I have been tasked with repairing our decrepit unit test framework and I'm simply trying to disable a few failing tests, but I don't know how to do this in code. In C#, it's as simple as adding the [Ignore] attribute; in C++, I figured out how to disable all of the tests for a particular class, but I want to do it for specific tests as well:
BEGIN_TEST_CLASS_ATTRIBUTE()
    TEST_CLASS_ATTRIBUTE(L"Ignore", L"true")
END_TEST_CLASS_ATTRIBUTE()
Does anyone know how to disable a specific unit test in a source file in C++ using the MSTest framework? Thanks in advance, Google has not been of much help!
You can do this:
BEGIN_TEST_METHOD_ATTRIBUTE(Test_Name)
    TEST_METHOD_ATTRIBUTE(L"Ignore", L"true")
END_TEST_METHOD_ATTRIBUTE()
TEST_METHOD(Test_Name)
{
    // code
}
Or this:
BEGIN_TEST_METHOD_ATTRIBUTE(Test_Name)
    TEST_IGNORE()
END_TEST_METHOD_ATTRIBUTE()
TEST_METHOD(Test_Name)
{
    // code
}
Check More here

Separating unit tests and integration tests in Go

Is there an established best practice for separating unit tests and integration tests in Go (testify)? I have a mix of unit tests (which do not rely on any external resources and thus run really fast) and integration tests (which do rely on external resources and thus run more slowly). So, I want to be able to control whether or not to include the integration tests when I run go test.
The most straightforward technique would seem to be to define an -integration flag in main:
var runIntegrationTests = flag.Bool("integration", false,
    "Run the integration tests (in addition to the unit tests)")
And then to add an if-statement to the top of every integration test:
if !*runIntegrationTests {
    this.T().Skip("To run this test, use: go test -integration")
}
Is this the best I can do? I searched the testify documentation to see if there is perhaps a naming convention or something that accomplishes this for me, but didn't find anything. Am I missing something?
@Ainar-G suggests several great patterns to separate tests.
This set of Go practices from SoundCloud recommends using build tags (described in the "Build Constraints" section of the build package) to select which tests to run:
Write an integration_test.go, and give it a build tag of integration. Define (global) flags for things like service addresses and connect strings, and use them in your tests.
// +build integration

var fooAddr = flag.String(...)

func TestToo(t *testing.T) {
    f, err := foo.Connect(*fooAddr)
    // ...
}
go test takes build tags just like go build, so you can call go test -tags=integration. It also synthesizes a package main which calls flag.Parse, so any flags declared and visible will be processed and available to your tests.
As a similar option, you could also have integration tests run by default by using a build condition // +build !unit, and then disable them on demand by running go test -tags=unit.
@adamc comments:
For anyone else attempting to use build tags, it's important that the // +build test comment is the first line in your file, and that you include a blank line after the comment, otherwise the -tags command will ignore the directive.
Also, the tag used in the build comment cannot have a dash, although underscores are allowed. For example, // +build unit-tests will not work, whereas // +build unit_tests will.
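To illustrate those two rules, a correctly formatted test file header would look like this (the package name is made up):

// +build integration

package services

The blank line between the build comment and the package clause is required; without it, Go treats the comment as package documentation rather than as a build constraint.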
To elaborate on my comment to @Ainar-G's excellent answer, over the past year I have been using the combination of -short with the Integration naming convention to achieve the best of both worlds.
Unit and Integration tests harmony, in the same file
Build tags previously forced me to have multiple files (services_test.go, services_integration_test.go, etc.).
Instead, take this example below where the first two are unit tests and I have an integration test at the end:
package services

import "testing"

func TestServiceFunc(t *testing.T) {
    t.Parallel()
    ...
}

func TestInvalidServiceFunc3(t *testing.T) {
    t.Parallel()
    ...
}

func TestPostgresVersionIntegration(t *testing.T) {
    if testing.Short() {
        t.Skip("skipping integration test")
    }
    ...
}
Notice the last test has the convention of:
using Integration in the test name.
checking whether it is running under the -short flag.
Basically, the spec goes: "write all tests normally. If it is a long-running test or an integration test, follow this naming convention and check for -short to be nice to your peers."
Run only Unit tests:
go test -v -short
this provides you with a nice set of messages like:
=== RUN TestPostgresVersionIntegration
--- SKIP: TestPostgresVersionIntegration (0.00s)
service_test.go:138: skipping integration test
Run Integration Tests only:
go test -run Integration
This runs only the integration tests. Useful for smoke testing canaries in production.
Obviously the downside to this approach is that if anyone runs go test without the -short flag, it will default to running all tests, unit and integration alike.
In reality, if your project is large enough to have unit and integration tests, then you are most likely using a Makefile, where you can add a simple target that runs go test -short. Or just put the command in your README.md file and call it a day.
I see three possible solutions. The first is to use the short mode for unit tests. So you would use go test -short with unit tests and the same but without the -short flag to run your integration tests as well. The standard library uses the short mode to either skip long-running tests, or make them run faster by providing simpler data.
The second is to use a convention and call your tests either TestUnitFoo or TestIntegrationFoo and then use the -run testing flag to denote which tests to run. So you would use go test -run 'Unit' for unit tests and go test -run 'Integration' for integration tests.
The third option is to use an environment variable, and get it in your tests setup with os.Getenv. Then you would use simple go test for unit tests and FOO_TEST_INTEGRATION=true go test for integration tests.
I personally would prefer the -short solution since it's simpler and is used in the standard library, so it seems like it's a de facto way of separating/simplifying long-running tests. But the -run and os.Getenv solutions offer more flexibility (more caution is required as well, since regexps are involved with -run).
I was recently trying to find a solution for the same problem.
These were my criteria:
The solution must be universal
No separate package for integration tests
The separation should be complete (I should be able to run integration tests only)
No special naming convention for integration tests
It should work well without additional tooling
The aforementioned solutions (custom flag, custom build tag, environment variables) did not really satisfy all the above criteria, so after a little digging and playing I came up with this solution:
package main

import (
    "flag"
    "regexp"
    "testing"
)

func TestIntegration(t *testing.T) {
    if m := flag.Lookup("test.run").Value.String(); m == "" || !regexp.MustCompile(m).MatchString(t.Name()) {
        t.Skip("skipping as execution was not requested explicitly using go test -run")
    }

    t.Parallel()

    t.Run("HelloWorld", testHelloWorld)
    t.Run("SayHello", testSayHello)
}
The implementation is straightforward and minimal. Although it requires a simple naming convention for tests, it is less error-prone. A further improvement could be extracting the check into a helper function.
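A minimal sketch of that helper (the function name is made up; it relies on the same flag, regexp and testing imports as the example above) could be:

func skipUnlessExplicitlyRequested(t *testing.T) {
    // Mark this function as a test helper so skip/failure lines point at the caller.
    t.Helper()
    // Skip unless the calling test's name was explicitly matched via go test -run.
    if m := flag.Lookup("test.run").Value.String(); m == "" || !regexp.MustCompile(m).MatchString(t.Name()) {
        t.Skip("skipping as execution was not requested explicitly using go test -run")
    }
}

Each integration test would then simply call skipUnlessExplicitlyRequested(t) as its first statement.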
Usage
Run integration tests only across all packages in a project:
go test -v ./... -run ^TestIntegration$
Run all tests (regular and integration):
go test -v ./... -run .\*
Run only regular tests:
go test -v ./...
This solution works well without tooling, but a Makefile or some aliases can make it easier to use. It can also be easily integrated into any IDE that supports running Go tests.
The full example can be found here: https://github.com/sagikazarmark/modern-go-application
I encourage you to look at Peter Bourgon's approach; it is simple and avoids some problems with the advice in the other answers: https://peter.bourgon.org/blog/2021/04/02/dont-use-build-tags-for-integration-tests.html
There are many downsides to using build tags, short mode, or flags; see the post linked above.
I would recommend using environment variables with a test helper that can be imported into individual packages:
func IntegrationTest(t *testing.T) {
    t.Helper()
    if os.Getenv("INTEGRATION") == "" {
        t.Skip("skipping integration tests, set environment variable INTEGRATION")
    }
}
In your tests you can now easily call this at the start of your test function:
func TestPostgresQuery(t *testing.T) {
    IntegrationTest(t)
    // ...
}
Why I would not recommend using either -short or flags:
Someone who checks out your repository for the first time should be able to run go test ./... and have all tests pass, which is often not the case if this relies on external dependencies.
The problem with the flag package is that it will work until you have integration tests across different packages, where some will run flag.Parse() and some will not, which leads to an error like this:
go test ./... -integration
flag provided but not defined: -integration
Usage of /tmp/go-build3903398677/b001/foo.test:
Environment variables appear to be the most flexible and robust option, and they require the least amount of code with no visible downsides.
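With the helper above, a full integration run is then just a matter of setting the variable (a usage sketch, matching the INTEGRATION variable checked in the helper):

INTEGRATION=true go test ./...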

Arbitrary unit tests fail in VS2012 after switching from Moles to Fakes

I have about 300 unit tests for an assembly which is part of a solution I originally started under VS2010. Numerous tests used the Moles framework provided by Microsoft, but after upgrading to VS2012 (Update 2) I wanted to change the tests to use the officially supplied Fakes framework.
I updated the corresponding tests accordingly, which usually only involved creating a ShimsContext and some minor changes to the code:
Before
[TestMethod]
[HostType( "Moles" )]
public void MyUnitTest_CalledWithXyz_ThrowsException()
{
    // Arrange
    ...
    MGroupPrincipal.FindByIdentityPrincipalContextIdentityTypeString =
        ( t1, t2, t3 ) => null;
    ...
    try
    {
        // Act
        ...
    }
    catch( Exception ex )
    {
        // Assert
        ...
    }
}
After
[TestMethod]
public void MyUnitTest_CalledWithXyz_ThrowsException()
{
    using( ShimsContext.Create() )
    {
        // Arrange
        ...
        ShimGroupPrincipal.FindByIdentityPrincipalContextIdentityTypeString =
            ( t1, t2, t3 ) => null;
        try
        {
            // Act
            ...
        }
        catch( Exception ex )
        {
            // Assert
            ...
        }
    }
}
I've got different test classes in my test project, and when I run the tests I get arbitrary errors which I cannot explain, e.g.:
Run tests for one class in Release mode => 21 tests fail / 15 pass
Run tests for same class in Debug mode => 2 tests fail / 34 pass
Run tests for same class again in Release mode => 2 tests fail / 34 pass
Run all tests in the project => 21 tests fail / 15 pass (for the class mentioned above)
Same behaviour for a colleague on his system. The error messages are always TypeLoadExceptions such as
Test method ... threw exception: System.TypeLoadException: Could not load type 'System.DirectoryServices.Fakes.ShimDirectorySearcher' in the assembly 'System.DirectoryServices.4.0.0.0.Fakes, Version=4.0.0.0, Culture=neutral, PublicKeyToken=..."
In VS2012 itself the source code editor doesn't show any errors, Intellisense works as expected, mouse tooltips over e.g. ShimDirectorySearcher show where it is located etc. Furthermore, when I open the Fakes assembly that's being generated (e.g. System.DirectoryServices.4.0.0.0.Fakes.dll) with .NET Reflector, the type shown in the error message exists.
All the tests worked fine (in Debug and Release mode as well) before we switched from VS2010 to VS2012, but now we don't have a clue what's wrong here. Why does the result change in the ways described above? Why do we get TypeLoadExceptions even though the types do exist?
Unfortunately there is hardly any help available from Microsoft or on the internet.
I don't quite understand why having the old .testsettings file from VS2010 is such a problem, but deleting it and adding a .runsettings file as suggested by MSDN did the job for me.
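For reference, the .runsettings file does not need any particular content for this to work; a minimal sketch would be:

<?xml version="1.0" encoding="utf-8"?>
<RunSettings>
</RunSettings>

You can then select it in VS2012 via the Test => Test Settings => Select Test Settings File menu.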
All problems were solved:
All unit tests run (again) without problems
Arbitrary combinations of tests run (again) without problems
I can debug tests using Fakes (before, I used to get test instrumentation errors)
Hope this helps others who run into problems; there doesn't seem to be too much information about Fakes yet.
One more thing regarding code coverage: this works (without having to configure any test settings) via the menu Test => Analyze Code Coverage. For TFS build definitions you can enable code coverage for the build by choosing Process => Basic => Automated Tests => 1. Test Source. Click into the corresponding text field and then on the ... button that is (only) shown when you click into the field. Choose Visual Studio Test Runner in the Test runner ComboBox; you can then also select Enable Code Coverage from the options.

Resharper running all tests when only a single one is selected

I'm using ReSharper 4.5 with Visual Studio 2008 and MbUnit testing, and there seems to be something odd with using ReSharper to run the tests.
In the margin there are icons beside the class and each test method, with the options Run and Debug. When I select Run, it just shows me the results of the single test. However, I noticed that the test was taking a considerably long time to run.
When I ran SQL Server Profiler and started stepping through the code, I realized that it's not just running the selected test, but every single one in the class. Is there any reason it makes it look like it's only running one unit test while actually running them all?
It's getting to be a pain waiting for all the integration tests to run when I only care about the result of one. Is there any way to change this?
I just encountered this today, and I think I might have realized what causes this bug. I had my methods named similarly:
[TestMethod]
public void TestSomething()

[TestMethod]
public void TestSomethingPart2()
I saw that running TestSomething() would run both, whereas running TestSomethingPart2() would not. I concluded that if one test's name is a prefix of another's, selecting the shorter one also runs the other. After renaming my second test to TestPart2Something, this issue went away.
I can confirm that this is a problem with ReSharper 5.1.
To reproduce, run test A from my sample code below (all tests will execute); run test AB (all except A will execute); etc.:
[TestMethod]
public void A()
{
    Console.WriteLine("A");
}

[TestMethod]
public void AB()
{
    Console.WriteLine("AB");
}

[TestMethod]
public void ABC()
{
    Console.WriteLine("ABC");
}

[TestMethod]
public void ABCD()
{
    Console.WriteLine("ABCD");
}

[TestMethod]
public void ABCDE()
{
    Console.WriteLine("ABCDE");
}
It took me ages to work this out. I had the remote debugger attached to a development server, and it was breaking a bit more often than I was expecting it to...
It seems to be doing a StartsWith instead of a Contains as others have said.
The workaround is to not have test method names that start with the name of another test method name.
I hope this shows up under Chris's post.
I had a similar situation that confirms the behavior he noticed.
[TestMethod()]
public void ArchiveAccountTest()

[TestMethod()]
public void ArchiveAccountTestRestore()
So running the first method would execute both, and running the second would not. I renamed my second method to TestRestore and the problem went away.
Note: I'm using ReSharper 5.1, so it's still a problem.
When you right-click in the editor, the context menu appears from which you can run and debug tests. Right-click inside a test method to run or debug that single test. Right-click outside of any test method to run or debug the entire test class contained in the current file.
The current release of Gallio includes a Unit Test runner with MbUnit (and NUnit) support built-in.
From the ReSharper menu, you have the option of running a single unit test or all tests in your solution. What is cool is that the keyboard shortcuts for this are:
Alt + R, U, R - Run test from current context (if you are at a [Test] level, it runs one test, if you are at a [TestFixture] level, it runs all in the fixture!)
Alt + R, U, N - Runs all Unit Tests in your Solution
I highly recommend that you uninstall your current Gallio, then check C:\Program Files\Jetbrains\Resharper\plugins\bin and clear out any files there. Then install Gallio afresh.
Once you've done this, you should start up VS2008 and go to the Resharper | Plugins menu to check that the Gallio plugin is active. This will give you support for MbUnit.