I want to run selective iterations from my test data file, say iterations 10-14. Is there a switch in Newman that I can use to give these inputs? The closest I came is the -n option, which selects the number of iterations to run, but this runs all iterations from 1 to n. I want to be able to change the starting and ending values.
As @Danny mentioned, there is no built-in way to specify a specific iteration row to be executed using Newman, but it can be done using PowerShell or by using Newman as a library.
The two approaches are below.
PowerShell:
Here we read the actual CSV file in the current directory using Import-Csv, then keep only rows 1..2. If you just want one row, you can change $a[1..2] to $a[1].
$a= Import-Csv .\a.csv
$a[1..2] | Select-Object * | export-csv -Path .\temp.csv -NoTypeInformation
newman run .\test.postman_collection.json -d .\temp.csv
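For the range asked about in the question (iterations 10-14), the same approach works; this is a minimal sketch, assuming the first data row after the header corresponds to iteration 1, so those iterations sit at indices 9..13:
$a = Import-Csv .\a.csv
$a[9..13] | Export-Csv -Path .\temp.csv -NoTypeInformation
newman run .\test.postman_collection.json -d .\temp.csv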
As a library:
Here we read and slice the CSV as we want using the csv-parse library, then write it back to a temporary file as CSV using the csv-stringify library.
First, install the dependencies (fs is built into Node.js and does not need to be installed separately):
npm install csv newman
Then use the code below:
const newman = require('newman'); // require newman in your project
const stringify = require('csv-stringify')
const parse = require('csv-parse/lib/sync')
const fs = require('fs')

// read the full data file
let data = fs.readFileSync('./a.csv', { encoding: 'utf8', flag: 'r' });
data = parse(data, {
    columns: true,
    skip_empty_lines: true
})
console.log(data)

// index doesn't consider header so 0 is first data row
stringify(data.slice(0, 2), {
    header: true
}, function (err, output) {
    fs.writeFileSync('temp.csv', output);
    // call newman.run to pass `options` object and wait for callback
    newman.run({
        collection: require('./test.postman_collection.json'),
        reporters: 'cli',
        iterationData: './temp.csv'
    }, function (err) {
        if (err) { throw err; }
        console.log('collection run complete!');
    });
})
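For the iteration range from the question (10-14), you would change data.slice(0, 2) to data.slice(9, 14): the first data row is index 0 and slice's end index is exclusive.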
I have a pipeline (classic view) with the task "Visual Studio Test", with task version "2.*".
After the task completes, I can see that it prints the test results in the log.
How can I save 'Total Tests' and 'Passed Tests' in variables to use with further tasks of the pipeline?
I tried extracting the .trx file but it gets deleted after the task completes.
Performing VsTest gives me this (Some tests fail, but it's OK):
Adding trx file C:\vsts-agent-win-x64-2.165.2\_work\6\s\TestResults\TestResults\----.trx to run attachments
**************** Completed test execution *********************
Result Attachments will be stored in LogStore
Publishing test results to test run '3748'.
TestResults To Publish 189, Test run id:3748
Test results publishing 189, remaining: 0. Test run id: ---
Published test case results: 189
Result Attachments will be stored in LogStore
Run Attachments will be stored in LogStore
Received the command : Stop
TestExecutionHost.ProcessCommand. Stop Command handled
SliceFetch Aborted. Moving to the TestHostEnd phase
Please use this link to analyze the test run : https://---
Test run '---' is in 'Completed' state with 'Total Tests' : 202 and 'Passed Tests' : 19.
##[error]System.Exception: Some tests in the test run did not pass, failing the task.
##########################################################################
Finishing: VsTest - testPlan
When I try to cd into the TestResults:
+ cd C:\vsts-agent-win-x64-2.165.2\_work\6\s\TestResults\TestResults
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo          : ObjectNotFound: (C:\vsts-agent-w...lts\TestResults:String) [Set-Location], ItemNotFoundException
+ FullyQualifiedErrorId : PathNotFound,Microsoft.PowerShell.Commands.SetLocationCommand
##[error]PowerShell exited with code '1'.
You can change the default test result output folder by setting the Test results folder field. See below:
Folder to store test results. When this input is not specified, results are stored in $(Agent.TempDirectory)/TestResults by default, which is cleaned at the end of a pipeline run
In the above example, the test result .trx file will be stored in the $(System.DefaultWorkingDirectory)\TestResults folder, which will not be cleaned up.
Then you can extract the .trx file in the following tasks and save 'Total Tests' and 'Passed Tests' in variables.
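For example, a follow-up PowerShell task along these lines could pull the counters out of the .trx and expose them as pipeline variables (a sketch, assuming the Test results folder field was set to $(System.DefaultWorkingDirectory)\TestResults as above; the trx stores the counts in its ResultSummary/Counters element, and the variable names totalTests/passedTests are just illustrative):
$trx = Get-ChildItem "$(System.DefaultWorkingDirectory)\TestResults" -Recurse -Filter *.trx | Select-Object -First 1
[xml]$doc = Get-Content $trx.FullName
$counters = $doc.TestRun.ResultSummary.Counters
Write-Host "##vso[task.setvariable variable=totalTests]$($counters.total)"
Write-Host "##vso[task.setvariable variable=passedTests]$($counters.passed)"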
See the screenshots from my test pipeline: the VsTest task log, and a PowerShell task listing the contents of the folder.
So it seems VsTest deletes all its results after the task is complete.
I solved this with a REST API command.
Make sure you convert your Personal Access Token to Base64...
Here's how I did it:
$personalToken = [your token]
$token = [System.Convert]::ToBase64String([System.Text.Encoding]::ASCII.GetBytes(":$($personalToken)"))
$header = @{ authorization = "Basic $token" }
$params = @{
    Uri     = 'https://dev.azure.com/[organization]/[project]/_apis/test/runs?buildIds=[BuildId]&api-version=6.0'
    Headers = $header
    Method  = 'GET'
}
$output = Invoke-RestMethod @params
$run = $output.value | Where-Object { $_.name -match [BuildId] }
Write-Host "Total Tests: $($run.totalTests)"
Write-Host "Passed Tests: $($run.passedTests)"
Write-Host "Failed Tests: $($run.unanalyzedTests)"
Write-Host "Skipped Tests: $($run.incompleteTests)"
I initially used the Mochawesome report but could not integrate it with AWS. It turned out I need the JUnit XML reporter in order to integrate with CodeBuild.
I've created the JUnit XML reports but I don't know how to merge them into one XML file so that it can be used in AWS.
The XML files get created (and these are what I've been trying to merge).
cypress.json file
{
  "reporter": "cypress-multi-reporters",
  "reporterOptions": {
    "reporterEnabled": "spec, mocha-junit-reporter",
    "mochaJunitReporterReporterOptions": {
      "mochaFile": "cypress/results/results-[hash].xml"
    }
  }
}
package.json file
"scripts": {
  "delete:reports": "rm cypress/results/* || true",
  "prereport": "delete:reports",
  "report": "cypress run --reporter cypress-multi-reporters --reporter-options mochaFile=cypress/results/results-[hash].xml"
},
"dependencies": {
  "cypress-multi-reporters": "^1.4.0",
  "junit-report-merger": "^0.0.6",
  "mocha": "^8.2.1",
  "mocha-junit-reporter": "^2.0.0"
}
Command line (but it doesn't take the password so my tests all fail)
$ yarn report --env password=<password>
I've created a package specially for that purpose. It is called junit-report-merger.
You should write a Node.js script which uses the functions exported from that package:
merge.js
// globby also needs to be installed (npm install globby) to resolve the glob pattern
const path = require('path')
const { mergeFiles } = require('junit-report-merger')
const globby = require('globby')

// top-level await is not allowed in a plain CommonJS script, so wrap the logic in an async function
async function run () {
    const inputFiles = await globby(['results/report-*.xml'])
    const outputFile = path.join(__dirname, 'results', 'combined-report.xml')
    mergeFiles(
        outputFile,
        inputFiles,
        err => {
            if (err) {
                console.error(err)
            }
            else {
                console.log('successfully merged')
            }
        }
    )
}

run()
Once the script is ready, you should run it after your tests. In your case, it will be something like this:
"scripts": {
"report": "cypress run --reporter cypress-multi-reporters --reporter-options mochaFile=cypress/results/results-[hash].xml",
"postreport": "node merge.js"
}
UPDATE
Just released version 1.0.0 of junit-report-merger, which has glob support, allows async/await and offers a CLI.
The code above should still work, but with that version, the merge.js file from above can be written in a shorter way:
const path = require('path')
const { mergeFiles } = require('junit-report-merger')

// again wrapped in an async function, since top-level await is not available in a plain CommonJS script
async function run () {
    const inputPattern = ['results/report-*.xml']
    const outputFile = path.join(__dirname, 'results', 'combined-report.xml')
    await mergeFiles(outputFile, inputPattern)
    console.log('successfully merged')
}

run()
But with version 1.0.0 you can avoid creating merge.js completely and use the CLI instead.
Like this:
"scripts": {
"report": "cypress run --reporter cypress-multi-reporters --reporter-options mochaFile=cypress/results/results-[hash].xml",
"postreport": "jrm ./results/combined-report.xml \"./cypress/results/results-*.xml\""
}
I have an Azure DevOps pipeline running in YAML.
I'm using the VSTest@2 task to execute some unit tests. This is all working fine and I see the test results appear in the Stage overview UI itself, and in the 'Tests and Coverage' overview in the header.
My YAML pipeline also posts a message to a Slack channel with links to the build, success/failure status and other things. I'm keen to add test results into the message too... just a simple 'Total tests X - Passed X - Failed X - Skipped X' display. This happens in a separate stage at the end.
Is there a way to get the test results from a previous stage in a later stage in the pipeline (running on a different agent)?
Are the tests available as an artifact, and if so where are they and in what format?
Would I be right in thinking the only way to do this is via the Azure DevOps API? (I can't really be bothered to set up auth with that in the pipeline just for this feature; I don't interact with the API anywhere else yet.)
The test results should be generated if you use the VSTest@2 task to execute tests. You can check the VSTest task log to see where the test result file is output. Usually the default result is a .trx file. You can change the output location by adding resultsFolder: 'output location' to the VSTest task.
Once you get the test result file, you can write scripts to extract the test result summary by adding a script task.
The example below uses a PowerShell script to extract the test summary from the .trx file and set it as an environment variable, which makes it available in the following tasks.
- powershell: |
    # get the path of the trx file from the output folder
    $path = Get-ChildItem -Path $(Agent.TempDirectory)\TestResults -Recurse -ErrorAction SilentlyContinue -Filter *.trx | Where-Object { $_.Extension -eq '.trx' }
    $appConfigFile = $path.FullName # path to the test result trx file
    #$appConfigFile = '$(System.DefaultWorkingDirectory)\Result\****.trx' # path to test result trx file
    $appConfig = New-Object XML
    $appConfig.Load($appConfigFile)
    $testsummary = $appConfig.DocumentElement.ResultSummary.Counters | select total, passed, failed, aborted
    echo "##vso[task.setvariable variable=testSummary]$($testsummary)" # set the test summary as an environment variable
  displayName: 'GetTestSummary'
  condition: always()
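A later task in the same job can then read the value with the $(testSummary) macro, for example:
- script: echo "Test summary - $(testSummary)"
  condition: always()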
To make the variable testSummary available in a following stage, you need to set it as an output variable (append ;isOutput=true to the task.setvariable command and give the step a name), add a dependency on this stage to the following stage, and then use an expression of the form dependencies.<previous stage name>.outputs['<job name>.<step name>.<variable name>'] to pass the test summary into a variable in the following stage.
Please check the example below:
stages:
- stage: Test
  displayName: 'Publish stage'
  jobs:
  - job: jobA
    pool: Default
    ...
    - powershell: |
        # get the path of the trx file from the output folder
        $path = Get-ChildItem -Path $(Agent.TempDirectory)\TestResults -Recurse -ErrorAction SilentlyContinue -Filter *.trx | Where-Object { $_.Extension -eq '.trx' }
        $appConfigFile = $path.FullName # path to the test result trx file
        #$appConfigFile = '$(System.DefaultWorkingDirectory)\Result\****.trx' # path to test result trx file
        $appConfig = New-Object XML
        $appConfig.Load($appConfigFile)
        $testsummary = $appConfig.DocumentElement.ResultSummary.Counters | select total, passed, failed, aborted
        # isOutput=true is required so the variable can be referenced from another stage
        echo "##vso[task.setvariable variable=testSummary;isOutput=true]$($testsummary)"
      # the step name is what the cross-stage reference uses (jobA.GetTestSummary.testSummary)
      name: GetTestSummary
      displayName: 'GetTestSummary'
      condition: always()
- stage: Release
  dependsOn: Test
  jobs:
  - job: jobA
    variables:
      testInfo: $[dependencies.Test.outputs['jobA.GetTestSummary.testSummary']]
    steps:
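    # illustrative only: consume the mapped variable in a later step,
    # e.g. to build the text of the Slack message mentioned in the question
    - script: echo "Test results - $(testInfo)"
      condition: always()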
Then you can get the extracted test results info by referencing the variable $(testInfo).
Hope the above helps!
I have a branch folder "feature-set"; under this folder there are multiple branches.
I need to run the below script in my Jenkinsfile with a condition: if the build runs from any branch under the "feature-set" folder (i.e. "feature-set/*"), then run the script.
the script is:
sh """
if [ ${env.BRANCH_NAME} = "feature-set*" ]
then
echo ${env.BRANCH_NAME}
branchName='${env.BRANCH_NAME}' | cut -d'\\/' -f 2
echo \$branchName
npm install
ng build --aot --output-hashing none --sourcemap=false
fi
"""
the current output shows that the condition doesn't match:
[ feature-set/swat5 = feature-set* ]
any help?
I would rewrite this to be primarily Jenkins/Groovy syntax and only go to shell when required.
Based on the info you provided, I assume your env.BRANCH_NAME always looks like `feature-set/<branch>`.
// Echo first so we can see value if condition fails
echo(env.BRANCH_NAME)
// startsWith better than contains() based on current usecase
if ((env.BRANCH_NAME).startsWith('feature-set')) {
    // Split branch string into list based on delimiter
    List<String> parts = (env.BRANCH_NAME).tokenize('/')
    /**
     * Grab everything minus the first part
     * This handles branches that include additional '/' characters
     * e.g. 'feature-set/feat/my-feat'
     */
    branchName = parts[1..-1].join('/')
    echo(branchName)
    sh('npm install && ng build --aot --output-hashing none --sourcemap=false')
}
This seems to be more on the shell side. Since you are planning to use a shell if condition, the below worked for me.
Administrator1@XXXXXXXX:
$ if [[ ${BRANCH_NAME} = feature-set* ]]; then echo "Success"; fi
Success
Remove the quotes around the pattern and use double brackets ([[ ... ]]) instead of single ones. Inside [[ ... ]], the unquoted feature-set* on the right-hand side is treated as a wildcard (glob) pattern rather than a literal string.
I have Newman version 3.9.3 installed on my Ubuntu box. I want to execute multiple collections from a folder, but executing the js file throws me a weird error saying
TypeError: newman.run is not a function.
Here is my execution script. Any help will be appreciated.
#!/usr/bin/env node
var newman = require(process.env.NVM_BIN+'/newman');
var fs = require('fs');
fs.readdir('./collections', function (err, files) {
    if (err) { throw err; }
    files = files.filter(function (file) {
        return (file.substr(-5) === '.json');
    });
    // now we iterate on each file name and call newman.run using each file name
    files.forEach(function (file) {
        newman.run({
            environment: require(`${__dirname}/live.postmane_environment.json`),
            collection: require(`${__dirname}/collections/${file}`),
            reporters: ['cli']
        }, function (err) {
            console.info(`${file}: ${err ? err.name : 'ok'}!`);
        });
    });
});
Following is the exact error.
/app/postman/execute:15
newman.run({
^
TypeError: newman.run is not a function
at /app/postman/execute:15:16
at Array.forEach (native)
at /app/postman/execute:14:11
at FSReqWrap.oncomplete (fs.js:123:15)
For the time being, I've solved this problem by using a bash script to run all of my collections in the folder, which has done the job. But I still could not understand why "run" is not available on the "newman" object.
I was getting the same error message and solved it by installing the Node.js package:
npm install --save newman
Basically, when you point require at the bin path, Node.js loads the CLI entry script rather than the library module, so run is not exported:
Instead of:
var newman = require(process.env.NVM_BIN+'/newman');
Should be:
var newman = require('newman');
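As a quick sanity check (a sketch), after npm install newman in the project folder, requiring the package should expose run as a function:
const newman = require('newman');
console.log(typeof newman.run); // expected to print 'function'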