Hi people! =)
I've got a problem.
I'm on Windows 10 64-bit and use newman-reporter-testrail.
In Postman I have two methods, each with three tests.
In TestRail I have three test cases for these tests and a test run containing them.
I created Windows system variables (TESTRAIL_USERNAME, TESTRAIL_APIKEY, TESTRAIL_RUNID, TESTRAIL_DOMAIN) and filled them in.
I added the case IDs from the TestRail cases to the tests in Postman.
When I run the collection JSON with Newman and the reporter, I get the error "newman-reporter-testrail: no test cases were found".
Oh, finally I got it! In Postman, my tests were written one after another with no blank line separating them, so the reporter couldn't find them. Running them with plain Newman (without the reporter) worked fine, though.
Is there a way to incorporate a cleanup script in Postman?
Use case: after the collection run (whether it succeeds or fails), I need to clear data in some of the databases/data stores, with a construct similar to try {} finally {}.
For example, say the collection run contains two APIs:
api1 -> puts data into Redis.
api2 -> functional verification.
I expect the cleanup hook to remove the data that was put in by step 1.
Writing the cleanup at the end of api2's test script works only as long as the test script executes without errors.
The problem gets worse when there are a large number of APIs and multiple data entries. We could handle this with setNextRequest, but that means additional code to write in each test script.
You could probably achieve this by running the collection from a script, using Newman as a Node library. This should give you more flexibility and control, letting you run certain actions at different points before, during, and after the run.
More information about the different options can be found here: https://github.com/postmanlabs/newman/blob/develop/README.md#api-reference
If it's just clearing out certain variable values, this can be done within the Tests tab of the last request in your collection.
The need here is to optionally run a pre-request script before each call in a folder of a Postman collection when running it through Newman.
For example, if running a test suite of 10 calls in one folder, the call would usually be:
newman run <collectionPath> --folder <folderPath>
Is there an option to pass something like the following?
newman run <collectionPath> --folder <folderPath> --pre-request_script someScript.js --test_script someTest.js
The reasons an ordinary Postman collection test / pre-request script is not being used:
1. (The main reason) A huge number of collections are already written, and it would be difficult to go into each one of them and add this code. It would be far more convenient to control this behavior from the command line.
2. The test / pre-request scripts may vary across different Newman runs, and these parameters would remove the need for complex conditional code in the pre-request / test scripts.
Is there any other alternative or solution for the same?
As of recent versions, you can add pre-request scripts, tests, and variables directly to the collection, or different ones on each subfolder. These collections can then be run in the normal way via Newman. It might solve your problem.
http://blog.getpostman.com/2017/12/13/keep-it-dry-with-collection-and-folder-elements/
In SonarQube 5.6.6, I can see on http://example.com/component_measures/metric/test_failures/list?id=myproject that my unit test results were successfully imported. This is indicated by
Unit Test Failures: 1
which I produced with a deliberately failing test.
I also see the filename of the failing test class in a long list, and I see the number of failed tests (again: 1).
But I can't find any further information: which method failed, the stack trace, stdout/stderr, in short everything that is also included in the build/reports/test/index.html files generated by Gradle. Clicking a list entry takes me to the code and coverage view, but I can't find any indication of which test failed.
Am I doing something wrong in the frontend, is it a configuration problem, or am I looking for a feature which doesn't exist in SonarQube?
This is how it looks currently:
http://example.com/component_measures/domain/Coverage: here I can see that one test failed.
http://example.com/component_measures/metric/test_success_density/list: here I can see which file it is.
But clicking on that entry only takes me to the source file, below which is the test that "failed"; there is no indication that this test failed, and I can't find any way to see the stack trace or the name of the failed test method.
Btw: the page in the first screenshot shows information about unit tests. If the failing test is an integration test, I don't even see those numbers.
Update
Something like this is probably what I'm looking for:
(found on https://deors.wordpress.com/2014/07/04/individual-test-coverage-sonarqube-jacoco/)
I have never seen such a view on my installation; I don't know how to get it, or whether it is implemented in the current version.
Unfortunately, Test execution details is a deprecated feature as of SonarQube 5.6.
If you install an older version such as SonarQube 4.x, you will get the following screen, which provides test case result details.
But this screen itself has since been removed.
Ref: https://jira.sonarsource.com/browse/SONARCS-657
Basically, the issue is that the unit test details report requires links back to the source code files, but the unit test cases are now only linked to assemblies.
I've written several XMLUnit tests (which fit into the JUnit framework) in Groovy and can execute them easily on the command line, as per the Groovy documentation, but I don't quite understand what else I have to do for them to produce the XML output that Jenkins/Hudson (or another CI server) needs to display the pass/fail results and a detailed report of the errors.
Currently, my kickoff script is this:
def allSuite = new TestSuite('The XSL Tests')
//looking in package xsltests.rail.*
allSuite.addTest(AllTestSuite.suite("xsltests/rail", "*Tests.groovy"))
junit.textui.TestRunner.run(allSuite)
and this produces something like this:
Running all XSL Tests...
....
Time: 4.141
OK (4 tests)
How can I make this create a JUnit test report xml file suitable to be read by Jenkins/Hudson?
Do I need to kick off the tests with a different JUnit runner?
I have seen this answer but would like to avoid having to write my own test report output.
After a little hackage I have taken Eric Wendelin's suggestion and gone with Gradle.
To do this I have moved my Groovy unit tests into the requisite directory structure, src/test/groovy/, with the supporting resources (input and expected output XML files) going into src/test/resources/.
All required libraries have been configured in the build.gradle file, described here in its entirety:
apply plugin: 'groovy'

repositories {
    mavenCentral()
}

dependencies {
    testCompile group: 'junit', name: 'junit', version: '4.+'
    groovy module('org.codehaus.groovy:groovy:1.8.2') {
        dependency('asm:asm:3.3.1')
        dependency('antlr:antlr:2.7.7')
        dependency('xmlunit:xmlunit:1.3')
        dependency('xalan:serializer:2.7.1')
        dependency('xalan:xalan:2.7.1')
        dependency('org.bluestemsoftware.open.maven.tparty:xerces-impl:2.9.0')
        dependency('xml-apis:xml-apis:2.0.2')
    }
}

test {
    jvmArgs '-Xms64m', '-Xmx512m', '-XX:MaxPermSize=128m'
    testLogging.showStandardStreams = true //not sure about this one, was in official user guide
    outputs.upToDateWhen { false } //makes it run every time even when Gradle thinks it is "Up-To-Date"
}
This applies the Groovy plugin, sets the project up to grab the specified dependencies from Maven Central, and then adds some extra configuration to the built-in test task.
One extra thing in there is the last line, which makes Gradle run all of my tests every time, not just the ones it thinks are new or changed; this makes Jenkins play nicely.
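(Note for anyone reading this later: the testCompile and module() notations above are long-gone legacy Gradle syntax. A rough sketch of an equivalent build file on a current Gradle might look like the following; the version numbers are illustrative, not prescriptive:)

// build.gradle (rough modern equivalent; Gradle 7+, versions are illustrative)
plugins {
    id 'groovy'
}

repositories {
    mavenCentral()
}

dependencies {
    implementation 'org.codehaus.groovy:groovy:3.0.17'
    testImplementation 'junit:junit:4.13.2'
    // xmlunit-legacy provides the old XMLUnit 1.x API under the new coordinates
    testImplementation 'org.xmlunit:xmlunit-legacy:2.9.1'
}

test {
    testLogging.showStandardStreams = true
    outputs.upToDateWhen { false } // always rerun so Jenkins sees fresh results
}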
I also created a gradle.properties file to get through the corporate proxy/firewall:
systemProp.http.proxyHost=10.xxx.xxx.xxx
systemProp.http.proxyPort=8080
systemProp.http.proxyUser=username
systemProp.http.proxyPassword=passwd
With this, I've created a 'free-style' project in Jenkins that polls our Mercurial repo periodically, so whenever anyone commits an updated XSL to the repo, all the tests are run.
One of my original goals was to produce the standard Jenkins/Hudson pass/fail graphics and the JUnit reports, and that has been a success: pass/fail with JUnit reports.
I hope this helps someone else with similar requirements.
I find the fastest way to bootstrap this stuff is with Gradle:
// build.gradle
apply plugin: 'groovy'

task initProjectStructure () << {
    project.sourceSets.all*.allSource.sourceTrees.srcDirs.flatten().each { dir ->
        dir.mkdirs()
    }
}
Then run gradle initProjectStructure and move your sources into src/main/groovy and your tests into src/test/groovy.
It seems like a lot (really, it's less than 5 minutes of work), but you get lots of stuff for free. Now you can run gradle test and it'll run your tests and produce JUnit XML (under build/test-results in your project directory) that you can use.
Since you're asking for the purposes of exposing the report to Jenkins/Hudson, I'm assuming you have a Maven/Ant/etc build that you're able to run. If that's true, the solution is simple.
First of all, there's practically no difference between Groovy and Java JUnit tests. So, all you need to do is add the Ant/Maven junit task/plugin to your build and have it execute your Groovy junit tests (just as you'd do if they were written in Java). That execution will create test reports. From there, you can simply configure your Hudson/Jenkins build to look at the directory where the test reports get created during the build process.
You can write your own custom RunListener (or SuiteRunListener). It still requires you to write some code, but it's much cleaner than the script linked in that answer. If you'd like, I can send you the code for a JUnit reporter I've written in JavaScript for Jasmine, and you can 'translate' it into Groovy.
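To give a sense of the shape this takes, here is a minimal sketch of a JUnit 4 RunListener in Groovy (the class and test names are illustrative); a real reporter would accumulate these events into JUnit-style XML instead of printing them:

import org.junit.runner.Description
import org.junit.runner.JUnitCore
import org.junit.runner.Result
import org.junit.runner.notification.Failure
import org.junit.runner.notification.RunListener

// Minimal listener: just logs lifecycle events; a real reporter would write XML here.
class ReportListener extends RunListener {
    void testStarted(Description d) { println "started: ${d.displayName}" }
    void testFailure(Failure f) { println "FAILED: ${f.description}: ${f.message}" }
    void testRunFinished(Result r) { println "ran ${r.runCount}, failures: ${r.failureCount}" }
}

// Register the listener and run the tests through JUnitCore.
def core = new JUnitCore()
core.addListener(new ReportListener())
core.run(MyXslTests) // MyXslTests is an illustrative test class name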
We have managed to get Jenkins to correctly parse the XML output from our tests, including the error information when there is any, so that it is possible to see, directly in the test case in Jenkins, the error that occurred.
What we would like is for Jenkins to also keep a log output, basically the console output, associated with each test case. This would enable anyone to see the actual console output of each test case, failed or not.
I haven't seen a way to do this.
EDIT: to clarify, I want to be able to see the actual test output directly in the Jenkins interface, the same way it is shown when there is an error, but for the whole output. I don't want Jenkins to merely keep the file as an artifact.
Can anyone help us with this?
In the Publish JUnit test result report post-build action, tick the Retain long standard output/error checkbox.
If checked, any standard output or error from a test suite will be retained in the test results after the build completes. (This refers only to additional messages printed to console, not to a failure stack trace.) Such output is always kept if the test failed, but by default lengthy output from passing tests is truncated to save space. Check this option if you need to see every log message from even passing tests, but beware that Jenkins's memory consumption can substantially increase as a result, even if you never look at the test results!
This is simple to do: just ensure that the output file is included in the list of artifacts for that job, and it will be archived according to the job's configuration.
Not sure if you have solved it yet, but I just did something similar using Android and Jenkins.
What I did was use http://code.google.com/p/the-missing-android-xml-junit-test-runner/ to run the tests on the Android emulator. This creates the necessary JUnit-formatted XML files on the emulator file system.
Afterwards, simply use 'adb pull' to copy the files over, and configure Jenkins to parse the results. You can also archive the XML files as artifacts if necessary.
If you simply want to display the content of the results in the log, you can use an 'Execute shell' step to print them to the console, where they will be captured in the log file.
Since Jenkins 1.386 there has been an option to Retain long standard output/error in each build configuration, so you just have to tick the checkbox in the post-build actions.
http://hudson-ci.org/changelog.html#v1.386
When using a declarative pipeline, you can do it like so:
junit testResults: '**/build/test-results/*/*.xml', keepLongStdio: true
See the documentation: the keepLongStdio parameter corresponds to the Retain long standard output/error option, whose description is quoted in the earlier answer.
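For context, a complete minimal declarative pipeline around that step might look like the following (the build command and report pattern are illustrative; adjust them to your project):

pipeline {
    agent any
    stages {
        stage('Test') {
            steps {
                // Run the tests; a failing test should not abort the pipeline here,
                // so the junit step below can still record the results.
                sh './gradlew test || true'
            }
        }
    }
    post {
        always {
            // keepLongStdio retains stdout/stderr from passing tests as well
            junit testResults: '**/build/test-results/*/*.xml', keepLongStdio: true
        }
    }
}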