How to force parallelization of tests on Azure+NUnit? - unit-testing

We are setting up our current solution on Azure DevOps.
We have a large set of tests (some unit tests, some integration tests), all running with NUnit.
We have configured the test task like this:
- task: VSTest@2
  timeoutInMinutes: 600
  inputs:
    testSelector: 'testAssemblies'
    testAssemblyVer2: '*.Test.dll'
    searchFolder: '$(System.DefaultWorkingDirectory)/$(buildConfiguration)'
    codeCoverageEnabled: false
    platform: 'Any CPU'
    configuration: '$(buildConfiguration)'
    rerunFailedTests: false
    pathtoCustomTestAdapters: 'Solution/packages/NUnit3TestAdapter.3.12.0/build/net35'
    minimumExpectedTests: 1000
    runInParallel: true
    distributionBatchType: basedOnAssembly
    failOnMinTestsNotRun: true
    resultsFolder: 'testResults'
It's working, but it's taking ages (I'm talking 5+ hours), and I'm looking for ways to speed things up.
One weird thing: even with runInParallel: true, there is only one process (testhost) running, consuming about 3-6% of the CPU.
What should I do to actually stress my system's CPU/hard disk/... and improve my test speed?
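For what it's worth: NUnit 3 runs the tests inside a single assembly sequentially unless the assembly opts in to parallel execution, so runInParallel alone may leave each testhost single-threaded. A minimal sketch of the opt-in (the worker count is illustrative):

// AssemblyInfo.cs (NUnit 3): opt in to in-assembly parallel execution
using NUnit.Framework;

// Allow test fixtures to run in parallel with one another
[assembly: Parallelizable(ParallelScope.Fixtures)]
// Number of worker threads; 4 is an illustrative value, tune it to the agent
[assembly: LevelOfParallelism(4)]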

Related

How can I skip running slow tests when running a particular test suite in PHPUnit, but still run all tests when I need full code coverage?

I have a single test suite that is marked as such in my PHPUnit configuration. The test suite contains many tests, and it also has database-intensive live-database tests, which take a long time to complete.
In particular one of the tests takes over 2 seconds to complete (see below).
I want to separate running fast tests from slow tests, so that I can run the slow-but-complete version when I have more time, but in general I want to run the fast tests for my every-day needs, omitting the slow tests when running the test suite.
How can I do this?
For the record, my phpunit.xml config is like so:
<phpunit bootstrap="bootstrap.php">
    <testsuite name="Crating">
        <directory>../module/Crating/test/</directory>
    </testsuite>
</phpunit>
The command I use to run my test suite is:
phpunit -c phpunit.xml --testsuite CratingCalc
One of the files in my ../module/Crating/test/ directory is CrateRepositoryTest.php. It looks like so:
class CrateRepositoryTest extends TestCase
{
    function testCombine()
    {
        // mocked-up hardcoded data
        $fake = new FakeCratingDataModel();
        // connection to the real live database
        $real = new CratingDataModel();
        /*
         * Tests that verify the mocked-up data matches the live data.
         * Their purpose is to alert me when the live database data or schema changes.
         */
        $this->assertEquals($fake->getContentsBySalesOrderNumber(7777), $real->getContentsBySalesOrderNumber(7777));
        $this->assertEquals($fake->getContentsByShopJobNumber(17167), $real->getContentsByShopJobNumber(17167));
        $this->assertEquals($fake->getNearCrating(20, 20, 20), $real->getNearCrating(20, 20, 20));
        $this->assertEquals($fake->getContentsByInquiryNumber(640, 2), $real->getContentsByInquiryNumber(25640, 2));
    }
}
Groups.
Normally, you can add annotations like @group small, or in my case @group ci (just for things I'll run in a full CI environment).
Having small, medium or large tests is in fact so common that there are dedicated group annotations - @small, @medium & @large - and there are also settings for the phpunit.xml file that can give a time limit for each (and will kill, and fail, tests that take too long):
<?xml version="1.0" encoding="UTF-8"?>
<phpunit xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         ....
         timeoutForLargeTests="5"
         timeoutForMediumTests="2"
         timeoutForSmallTests="1"
         .... >
I have two ways to run my tests - the full version that does not exclude any groups (running over 1250 tests takes around 50 seconds, without coverage), and a faster run that adds --exclude-group ci,large,webtest to the phpunit command and can run 630 of the tests in less than 4 seconds.
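To illustrate how a test opts in to a group (the class and method here are hypothetical, not from the question):

use PHPUnit\Framework\TestCase;

class LiveDatabaseTest extends TestCase
{
    /**
     * @group ci
     * @large
     */
    public function testLiveSchemaMatchesFixture()
    {
        // hits the live database, so it only runs in the full CI pass
        $this->assertTrue(true);
    }
}

The every-day run then skips it with phpunit -c phpunit.xml --exclude-group ci,large,webtest.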

Jenkins test results behaviour

I am new to Jenkins, and of late I have observed that in one of the jobs, when I build the application, the test results change even though no new changes have been committed.
Out of curiosity I ran that job multiple times.
The test results look like this:
1st build: test cases run: 100, failed: 0, errors: 22
2nd build: test cases run: 100, failed: 0, errors: 70
3rd build: test cases run: 100, failed: 0, errors: 60
The build became unstable.
Those test cases are negative test cases.
But why is this happening? Is this normal?
Note: we are using JUnit.

Test Result Report in Jenkins Pipeline using Groovy | Pipeline Execution | Report Types

I am setting up test result reporting in Jenkins for test projects written in various test frameworks (NUnit, MSTest, etc.) and would like to improve my understanding of report types and of the difference between stages and post in pipeline execution.
Pipeline Execution
Stages are executed in the order in which they appear; if one stage fails, any stages that follow it will not be executed.
post, on the other hand, gets executed after the stages, regardless of whether or not they completed successfully.
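A minimal skeleton of that ordering (the stage names are illustrative):

pipeline {
    agent any
    stages {
        stage('Build') {
            steps { echo 'runs first' }
        }
        stage('Test') {
            steps { echo 'not executed if Build fails' }
        }
    }
    post {
        always { echo 'runs after the stages, whether they passed or failed' }
    }
}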
Report Types
Provided I have a stage (produces test result):
stage('MSTest') {
    steps {
        bat(script: 'dotnet test "..\\TestsProject.csproj" --logger "trx;LogFileName=TestResult.xml"')
    }
}
And a post that runs always (consumes the test result to produce a test result report):
post {
    always {
        xunit testTimeMargin: '5000', thresholdMode: 1, thresholds: [], tools: [ReportType(deleteOutputFiles: true, failIfNotNew: false, pattern: '..\\TestResult.xml', skipNoTestFiles: false, stopProcessingIfError: false)]
    }
}
Project variations:
If my test project is written with NUnit, the 'ReportType' placeholder in 'tools:' needs to be replaced with NUnit3 for the post to execute successfully.
If my test project is written with MSTest, the 'ReportType' placeholder in 'tools:' needs to be replaced with MSTest for the post to execute successfully.
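For example, the NUnit variant of the post block would look like this (a sketch that keeps the question's pattern and assumes the Jenkins xunit plugin, which provides the NUnit3 and MSTest tool symbols):

post {
    always {
        xunit testTimeMargin: '5000', thresholdMode: 1, thresholds: [], tools: [NUnit3(deleteOutputFiles: true, failIfNotNew: false, pattern: '..\\TestResult.xml', skipNoTestFiles: false, stopProcessingIfError: false)]
    }
}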

intern-geezer never responding on BrowserStackTunnel

I am trying to get intern-geezer (so far tested on 2.0.1 and 2.1.1) to work so I can run tests on IE8 as well.
Currently I am trying to run tests on BrowserStack, but unfortunately the test script seems to freeze and never returns a response, so I have to stop the process manually.
My configuration is:
test/simpleTest.js
define(['intern!object', 'intern/chai!assert'], function (registerSuite, assert) {
    registerSuite({
        name: 'simpleTest',
        sum: function () { assert.strictEqual(2 + 2, 4, 'Should sum'); }
    });
});
test/intern-geezer.js
define({
    proxyPort: 9000,
    proxyUrl: 'http://localhost:9000/',
    capabilities: { 'selenium-version': '2.41.0' },
    environments: [
        { browserName: 'internet explorer', version: '9', platform: 'WINDOWS' }
    ],
    tunnel: 'BrowserStackTunnel',
    suites: [ 'test/simpleTest' ],
    excludeInstrumentation: /^(?:test|node_modules)\//
});
Then I call intern-runner:
./node_modules/intern-geezer/bin/intern-runner.js config=test/intern-geezer.js
Listening on 0.0.0.0:9000
Starting tunnel...
BrowserStackLocal v3.3
Ready
Initialised internet explorer 9 on WINDOWS
Then it just stays there forever, no matter what the environment is.
When checking BrowserStack for exceptions, it seems this is the last operation to run before it freezes:
Get URL⇒ http://localhost:9000/__intern/client.html?config=test%2Fintern-geezer.js&reporters=webdriver&functionalSuites=undefined&suites=test%2FsimpleTest&baseUrl=%2F&sessionId=f07e36f7e73786173ee0cfa98feb7e4b9bff3e2c
Any ideas?
Those tests work fine when using the master branch of intern.
The config command-line argument is supposed to be a module ID, not a filename. In other words, the .js needs to be removed.
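So the invocation from the question becomes:

./node_modules/intern-geezer/bin/intern-runner.js config=test/intern-geezer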

Codeception - Acceptance tests work but Functional test don't

I am running the latest version of Codeception on a WAMP platform. My acceptance test is very basic but works fine (see below):
$I = new WebGuy($scenario);
$I->wantTo('Log in to the website');
$I->amOnPage('/auth/login');
$I->fillField('identity','admin@admin.com');
$I->fillField('password','password');
$I->click('Login');
In a nutshell - it checks that the page is 'auth/login', fills out two form fields and clicks the login button. This works without any problems.
Here is my identical functional test:
$I = new TestGuy($scenario);
$I->wantTo('perform actions and see result');
$I->amOnPage('/auth/login');
$I->fillField('identity','admin@admin.com');
$I->fillField('password','password');
$I->click('Login');
When I run this from the command line I get the following error (not the full error, but enough to understand the problem):
1) Couldn't <-[35;1mperform actions and see result<-
[0m in <-[37;1LoginCept.php<-[0m <-41;37mRuntimeException:
Call to undefined method TestGuy::amOnPage<-[0m.......
My Acceptance suite has the 'PhpBrowser' & 'WebHelper' modules enabled; the Functional suite has 'Filesystem' & 'TestHelper' enabled (within the acceptance.suite.yml & functional.suite.yml files).
Obviously the amOnPage() function is the problem - however I am led to believe amOnPage() should work in both acceptance and functional tests? Or am I wrong? Also - can someone explain what the numbers that appear mean, e.g. '<-[35;1m'?
UPDATE: I tried adding the 'WebHelper' module to functional.suite.yml, but I do not see amOnPage() being auto-generated in the TestGuy.php file - any ideas?
My config files are below:
WebGuy
class_name: WebGuy
modules:
    enabled:
        - PhpBrowser
        - WebHelper
    config:
        PhpBrowser:
            url: 'http://v3.localhost/'
TestGuy
class_name: TestGuy
modules:
    enabled: [Filesystem, TestHelper, WebHelper]
Well, this is because TestGuy doesn't have those methods. All of those methods are in the PhpBrowser and Selenium2 modules, or others that inherit from Codeception's Mink implementation. So you need to add PhpBrowser to the modules section of your functional suite, and then run the codecept build command.
Also note that it is better to use the Selenium2 module for acceptance tests and PhpBrowser for functional tests. The main idea is that acceptance (Selenium2) tests should cover the parts of your application that cannot be covered by functional (PhpBrowser) tests, for example some JS interactions.
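A functional.suite.yml along those lines would look something like this (a sketch based on the configs above; run codecept build afterwards so TestGuy picks up the new methods):

class_name: TestGuy
modules:
    enabled:
        - PhpBrowser
        - TestHelper
    config:
        PhpBrowser:
            url: 'http://v3.localhost/'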
About '<-[35;1m': those are ANSI color escape codes. Start the script with codecept run --no-colors to remove them from the console output.