Azure DevOps VsTest multi-agent parallel - unit-testing

I use Azure DevOps Server 2020 with self-hosted agents and created a CI pipeline that runs all tests in parallel on one agent. The ~5000 tests (no UI tests) take around 7 min to complete. That's way too slow for our needs, so to speed things up I added 3 agents and put the VsTest task into another job in the same pipeline with parallel: 4. All 4 agents first download the build artifacts and then run a slice of the tests. Unfortunately this actually made things worse: the test run now takes around 8 min on each agent.
My VsTest YAML for one agent:
- task: VSTest@2
  displayName: 'Run tests'
  inputs:
    testSelector: 'testAssemblies'
    testAssemblyVer2: |
      **\*test*.dll
      !**\*TestAdapter.dll
      !**\*TestFramework.dll
      !**\obj\**
    searchFolder: '$(System.ArtifactsDirectory)'
    runInParallel: true
    codeCoverageEnabled: false
    rerunFailedTests: false
My VsTest YAML for 4 agents:
- task: VSTest@2
  displayName: 'Run tests'
  inputs:
    testSelector: 'testAssemblies'
    testAssemblyVer2: |
      **\*test*.dll
      !**\*TestAdapter.dll
      !**\*TestFramework.dll
      !**\obj\**
    searchFolder: '$(System.ArtifactsDirectory)'
    runInParallel: true
    codeCoverageEnabled: false
    distributionBatchType: 'basedOnExecutionTime'
    rerunFailedTests: false
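For context, the multi-agent job this task sits in is set up roughly as sketched below; the job, pool, and artifact names here are placeholders and not taken from the actual pipeline:

- job: RunTests                        # placeholder job name
  dependsOn: Build                     # placeholder build job name
  pool: 'SelfHostedPool'               # placeholder pool name
  strategy:
    parallel: 4                        # slices the tests across the 4 agents
  steps:
  - task: DownloadBuildArtifacts@0
    inputs:
      buildType: 'current'
      downloadType: 'single'
      artifactName: 'drop'             # placeholder artifact name
      downloadPath: '$(System.ArtifactsDirectory)'
  - task: VSTest@2
    displayName: 'Run tests'
    inputs:
      # same inputs as in the 4-agent snippet above
      testSelector: 'testAssemblies'
      searchFolder: '$(System.ArtifactsDirectory)'
      runInParallel: true
      distributionBatchType: 'basedOnExecutionTime'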
I even tried batching by assembly and batching based on the number of tests and agents; the test run time still sits at ~8 min.
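The batching variants I tried correspond to these VSTest@2 inputs (only the relevant lines shown as a sketch; everything else stays as above):

# batching by assembly
distributionBatchType: 'basedOnAssembly'

# batching based on the number of tests and agents
distributionBatchType: 'basedOnTestCases'
batchingBasedOnAgentsOption: 'autoBatchSize'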
Compare this to our old UI-based CI pipeline, which uses multi-configuration with a multiplier on a variable holding 4 TestCategories. It runs even more tests, ~10000 (including the 5000 of the new pipeline), but these are distributed by TestCategory across the same 4 agents (Cat1 on agent 1, Cat2 on agent 2, and so on), and the agents average ~5 min each.
The YAML of the UI-based pipeline looks like this (a rough YAML equivalent of the multi-config setup is sketched after the snippet):
steps:
- task: VSTest@2
  displayName: 'Run tests'
  inputs:
    searchFolder: '$(Build.BinariesDirectory)'
    testFiltercriteria: 'TestCategory=$(Tests)'
    runInParallel: true
    codeCoverageEnabled: false
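For comparison, that multi-config/multiplier setup corresponds roughly to a matrix over the Tests variable in YAML; this is only a sketch, and the job and category names are placeholders:

- job: CategoryTests                 # placeholder job name
  strategy:
    matrix:
      Cat1:
        Tests: 'Cat1'                # placeholder category names
      Cat2:
        Tests: 'Cat2'
      Cat3:
        Tests: 'Cat3'
      Cat4:
        Tests: 'Cat4'
  steps:
  - task: VSTest@2
    displayName: 'Run tests'
    inputs:
      searchFolder: '$(Build.BinariesDirectory)'
      testFiltercriteria: 'TestCategory=$(Tests)'
      runInParallel: true
      codeCoverageEnabled: false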
I feel like I must be missing something obvious.
Thanks in advance!
Edit 1:
I connected to my agents via RDP, and in Task Manager there are multiple instances of testhost.x86 running, up to 8 simultaneously, but not constantly. When I run the tests locally, the 8+ instances of testhost.x86 stay up almost the whole time and rarely disappear. If that's any help.

Related

How to Have 2 Code Coverages in Gitlab Repo Badges

My team has a GitLab repo. It has two parts: an NPM package under the projects folder and an Angular application under the src folder, so there are 2 projects in the angular.json file.
We currently have unit tests with coverage set up in our GitLab pipelines. The issue is that, since we have 2 projects in this repo, we really need to show the coverage for each project.
I noticed in the demo image of the GitLab badges documentation (https://docs.gitlab.com/ee/user/project/badges.html) that they have a 'JS Coverage' badge. This seems to be a custom badge (I can't find a list of built-in badges, and I'm not finding anything for 'JS Coverage', so I'm assuming it's custom).
So I think I can do something like that to create 2 custom badges that show the code coverage of each project (one for 'Pkg Coverage' and one for 'App Coverage'). But (TBH) the documentation around creating custom badges isn't great. I need to know how to store this custom value to use in the badge, and how to update it in the GitLab pipeline.
Does anyone know how to achieve this? If I could just figure out how that example is using 'JS Coverage' (and how to update the value in the pipeline), then I could figure out what I need to do for my 2 custom badges. Any tips?
Some details: right now we have a GitLab job like this (it runs unit tests and updates the coverage values; since 'ng test' runs the tests of both projects one by one, the code coverage of the 1st project is saved to the 'coverage' value):
unit-tests:
  stage: test
  rules:
    # Run unit tests, including when merge requests are merged to default branch (so coverage % is updated)
    - when: on_success
  image: trion/ng-cli-karma:$ANGULAR_VERSION
  before_script:
    - *angular-env-setup-script
  coverage: '/Statements \W+: (\d+\.\d+)%.*/'
  script:
    - npm run build:ds-prod
    - npm install dist/ds
    - ng test --code-coverage --progress false --watch false
  artifacts:
    expose_as: "Coverage Report"
    paths:
      - coverage/
  tags:
    - kubernetes-runner
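One possible approach (a sketch, not verified against this setup): split the tests into two jobs, one per project, each with its own coverage: regex, and point each badge at its job via the job query parameter on the coverage badge URL. The job names and Angular project names below are assumptions:

unit-tests-pkg:
  stage: test
  image: trion/ng-cli-karma:$ANGULAR_VERSION
  before_script:
    - *angular-env-setup-script
  coverage: '/Statements \W+: (\d+\.\d+)%.*/'
  script:
    # 'ds' is the assumed name of the package project in angular.json
    - ng test ds --code-coverage --progress false --watch false

unit-tests-app:
  stage: test
  image: trion/ng-cli-karma:$ANGULAR_VERSION
  before_script:
    - *angular-env-setup-script
  coverage: '/Statements \W+: (\d+\.\d+)%.*/'
  script:
    # 'app' is the assumed name of the application project in angular.json
    - ng test app --code-coverage --progress false --watch false

Each badge image URL can then select a specific job's coverage, e.g. https://gitlab.example.com/<namespace>/<project>/badges/<branch>/coverage.svg?job=unit-tests-pkg.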

How to publish 2 sets of test results on Azure DevOps?

We have a git repository that contains .NET code (back-end) and, since recently, TypeScript code (Angular, front-end).
When we added the Angular test execution, it appeared that the initial .NET tests were no longer correctly published. From my testing, it seems that only the last publish is kept.
How can we keep both of them? It's important because it's used in a PR and we don't want to miss anything.
Here is how we publish the .NET tests:
- task: PublishTestResults@2
  inputs:
    testResultsFormat: 'NUnit'
    testResultsFiles: '**/TEST-*.xml'
    searchFolder: '$(System.DefaultWorkingDirectory)/testResults'
    failTaskOnFailedTests: true
    testRunTitle: '.Net tests'
    buildPlatform: 'Any CPU'
    buildConfiguration: 'Debug'
And here is how we publish the Angular tests:
- task: PublishTestResults@2
  displayName: 'Publish Angular test results'
  condition: succeededOrFailed()
  inputs:
    searchFolder: $(System.DefaultWorkingDirectory)/angular-test/results
    testRunTitle: Angular
    testResultsFormat: JUnit
    testResultsFiles: '**/TESTS*.xml'
How can we ensure both test results are considered in Azure DevOps?
(Here you see only the Angular run and its few tests; you can also see that there have been two test runs.)
I thought about doing the PublishTestResults task only once, but since the results have different formats (.NET is NUnit, while Angular is JUnit), that won't work.
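Keeping two publish steps, one per format, each with its own testRunTitle and a succeededOrFailed() condition, should in principle produce two distinct test runs; a sketch combining the two snippets above (the condition on the first task is an addition, not part of the original pipeline):

- task: PublishTestResults@2
  displayName: 'Publish .NET test results'
  condition: succeededOrFailed()
  inputs:
    testResultsFormat: 'NUnit'
    testResultsFiles: '**/TEST-*.xml'
    searchFolder: '$(System.DefaultWorkingDirectory)/testResults'
    failTaskOnFailedTests: true
    testRunTitle: '.Net tests'

- task: PublishTestResults@2
  displayName: 'Publish Angular test results'
  condition: succeededOrFailed()
  inputs:
    testResultsFormat: 'JUnit'
    testResultsFiles: '**/TESTS*.xml'
    searchFolder: '$(System.DefaultWorkingDirectory)/angular-test/results'
    testRunTitle: 'Angular'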

Can I build app in CodeBuild only once, and then run parallel Cypress tests on it using a build-matrix?

I have been following this official documentation on how to get parallel builds running in AWS CodeBuild using a batch matrix. Right now my buildspec.yml is structured like this:
## buildspec.yml
version: 0.2

batch:
  fast-fail: false
  build-matrix:
    dynamic:
      env:
        variables:
          INSTANCES:
            - A
          WORKERS:
            - 1
            - 2

phases:
  install:
    commands:
      - npm ci
  build:
    commands:
      - npx cypress run <params>
In this example we run two parallel workers, though IRL we run 11.
This works well for one use case, where we check out the code and run the Cypress tests against the pre-defined URL of one of our test environments. However, we have another use-case where we need to build the application within the CodeBuild container, start a server on localhost, and then run the Cypress tests against that.
One option, of course, is just to build the app 11 times. However, since CodeBuild pricing is by the machine minute, I'd rather build once instead of 11 times. I also don't like the idea of technically testing 11 different builds (albeit all built off the same commit).
What I'm looking for is behavior similar to Docker's multi-stage build functionality, where you can build the app once in one environment, and then copy that artifact to 11 separate envs, where the parallel tests will then run. Is functionality like this going to be possible within CodeBuild itself, or will I have to do something like have two CodeBuild builds and upload the artifact to S3? Any and all ideas welcome.
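One way to approximate that within CodeBuild, sketched under the assumption that a batch build-graph and an S3 bucket are acceptable (the bucket name, build command, and local server command below are placeholders), is to build once, upload the output to S3, and have the Cypress workers depend on that build and pull the artifact down:

version: 0.2

batch:
  fast-fail: false
  build-graph:
    - identifier: build_app
      env:
        variables:
          ROLE: build
    - identifier: test_worker_1
      depend-on:
        - build_app
      env:
        variables:
          ROLE: test
          WORKER: "1"
    - identifier: test_worker_2
      depend-on:
        - build_app
      env:
        variables:
          ROLE: test
          WORKER: "2"

phases:
  install:
    commands:
      - npm ci
  build:
    commands:
      # build job: compile once and stash the artifact in S3 (bucket name is a placeholder)
      - |
        if [ "$ROLE" = "build" ]; then
          npm run build
          tar czf app-build.tar.gz dist
          aws s3 cp app-build.tar.gz s3://my-artifact-bucket/$CODEBUILD_RESOLVED_SOURCE_VERSION/app-build.tar.gz
        fi
      # test jobs: download the same artifact, serve it locally, and run Cypress against it
      - |
        if [ "$ROLE" = "test" ]; then
          aws s3 cp s3://my-artifact-bucket/$CODEBUILD_RESOLVED_SOURCE_VERSION/app-build.tar.gz .
          tar xzf app-build.tar.gz
          npx serve dist &          # placeholder for however the app is served on localhost
          npx cypress run <params>
        fi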

Why does only one GitHub self-hosted runner accept the new job?

I have three Ubuntu PCs, each with its own GitHub self-hosted runner. Two of the runners (on PC 1 and PC 2) are labeled test; the third (PC 3) is labeled production. In addition, all runners are labeled self-hosted.
On GitHub, I have three branches: dev, test, and production. The goal is that when I merge a pull request into the test or production branch, all runners with the targeted label pull the new version, then build and compose the Docker image/container.
This procedure works on PC 1 and PC 3, merging on the test and production branch respectively. However, the runner on PC 2 remains idle while PC 1 runs the "test" job immediately after merging the test pull request.
I have double-checked that the runners on PC 1 and PC 2 have the same labels. What am I doing wrong, or not understanding properly? Do I have to create a workflow file for each PC?
Here is the workflow file test.yml
name: Test

on:
  # Triggers the workflow on push to the test branch.
  push:
    branches: [ test ]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

jobs:
  build:
    runs-on: [self-hosted, test]
    steps:
      - uses: actions/checkout@v2
        with:
          ref: test
      # Pulling latest code from GitHub
      - name: Pull, build, compose up
        run: |
          docker build -f Dockerfile -t test-1 .
          docker-compose up -d
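As far as I understand, a single job is dispatched to only one matching runner, so with two runners sharing the test label only one of them picks the job up. If each PC should run it, one option is to give each runner a unique label and fan the job out with a matrix; this is a sketch, and the per-machine labels pc-1 and pc-2 are hypothetical names that would need to be added to the runners:

jobs:
  build:
    strategy:
      matrix:
        runner: [pc-1, pc-2]   # hypothetical per-machine labels
    runs-on: [self-hosted, "${{ matrix.runner }}"]
    steps:
      - uses: actions/checkout@v2
        with:
          ref: test
      - name: Pull, build, compose up
        run: |
          docker build -f Dockerfile -t test-1 .
          docker-compose up -d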

Azure Pipelines builds all projects individually before running tests

We have an application with a number of projects isolated in their own solutions, each with their own UnitTest and IntegrationTest projects within those solutions. What happens on our locally hosted Azure DevOps application is that the following code forces Azure DevOps to build each project in the solution before running the tests. What I'd like to do is run all tests sequentially after an initial build, or at least cut the build time down, because on the build server each build takes about a minute or two, which is the bulk of the time. By comparison, when XUnit runs the tests in, say, Rider, it processes all tests across a solution from multiple projects well within a minute.
Is there a way to cut the build time or is this as good as it gets?
- task: DotNetCoreCLI@2
  displayName: Unit Tests
  inputs:
    command: test
    projects: '**/*UnitTest*/*.csproj'
    arguments: '--configuration $(BuildConfiguration)'

# run integration tests
- task: DotNetCoreCLI@2
  displayName: Integration Tests
  inputs:
    command: test
    projects: '**/*IntegrationTest*/*.csproj'
    arguments: '--configuration $(BuildConfiguration)'
What happens on our locally hosted Azure DevOps application is that the following code forces Azure DevOps to build each project in the solution before running tests.
For this issue, you can add the --no-build argument to skip building the projects on the test run.
--no-build:
Doesn't build the test project before running it. This is listed in the Options section of the documentation.
- task: DotNetCoreCLI@2
  displayName: 'dotnet test'
  inputs:
    command: test
    projects: '**/*UnitTest*/*.csproj'
    arguments: '--configuration $(BuildConfiguration) --no-build'
Here is a case with a similar issue that you can refer to.
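Note that --no-build assumes the assemblies were already produced by an earlier step in the same job; a minimal sketch of such a preceding build step (the solution glob is a placeholder):

- task: DotNetCoreCLI@2
  displayName: 'dotnet build'
  inputs:
    command: build
    projects: '**/*.sln'   # placeholder; point this at your solution(s) or test projects
    arguments: '--configuration $(BuildConfiguration)'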