We are running an automated test where each file counts as its own test, using the DynamicData attribute to provide the files to be tested. Each file does get tested; however, the TRX file logs them essentially as sub-tests, and if any one of them fails, the whole bucket counts as failing. This gives us inaccurate numbers for how many files actually passed or failed when we publish our test results in the Azure pipeline. Is there a way to mark these sub-tests as actual tests so that the counting is done accurately?
Here you can see that 2 of the sub-tests actually passed, but that is not reflected in the results in the header (the reason it says 0/4 instead of 0/1 is that there are 3 other similar test "buckets" that contain both passing and failing tests but are also being marked as all failing).
I am trying to understand why my JUnit XML report results in an Incomplete status on AWS CodeBuild.
The XML is produced by Kaocha, a Clojure test runner, through its kaocha-junit-xml plugin.
At the end of my test run, the XML is generated and then processed in the UPLOAD_ARTIFACTS phase, where a calculation is done that results in:
error processing report: [InvalidInputException: Test summary: status count cannot be negative]]
I do have multiple assertions per test, and thus there may be more than 1 failure per test.
To verify that the JUnit XML file itself isn't buggy, I installed Jenkins and ran a couple of tests; they work and do not end in an Incomplete report status.
Note that the Test Run status is Failed, and only the Report status is Incomplete.
Review the testsuite attributes in your JUnit XML, e.g.
tests="1" failures="1" errors="1"
Here, failures and errors are counted separately, so the total becomes 2 (1 + 1), which is greater than the number of tests (1) and produces the negative status count (1 - 2 = -1).
I am not sure about the JUnit format, but if you can tweak the output so that either failure or error is populated for a test case (not both), then this "status count cannot be negative" error should not appear.
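For illustration, a hypothetical report like the one below would trip the check: the single test case carries both a failure and an error, so failures + errors = 2 exceeds tests="1" and the remaining status count goes negative.

<testsuite name="example.suite" tests="1" failures="1" errors="1">
  <testcase classname="example" name="multi-assertion-case">
    <failure message="assertion failed"/>
    <error message="unexpected exception"/>
  </testcase>
</testsuite>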
Is there a way to incorporate a clean up script in Postman?
Use case: after the collection run (whether it succeeds or fails), I need to clear data in some of the databases/data stores, similar to a try {} finally {} construct.
For example, the collection run contains two APIs:
api1 -> puts data in Redis.
api2 -> functional verification
The clean-up hook is expected to remove the data that was put in step 1.
Writing the clean-up at the end of api2's test script works only if that test script executes without errors.
The problem gets worse when there are a large number of APIs and multiple data entries. We could handle this with setNextRequest, but that means writing additional code in each test script.
You could probably achieve this by running the collection file within a script, using Newman. This should give you more flexibility and control over running certain actions at different points before, during and after the run.
More information about the different options can be found here: https://github.com/postmanlabs/newman/blob/develop/README.md#api-reference
If it's just clearing out certain variable values, this can be done within the Tests tab of the last request in your collection.
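For example, a minimal Node.js sketch along these lines could run the collection and always perform the clean-up afterwards, whether the tests pass or fail; the collection file path and the cleanUpDataStores() helper are hypothetical placeholders for your own.

const newman = require('newman');

// Hypothetical clean-up helper: replace the body with real calls to Redis / your data stores.
async function cleanUpDataStores() {
    console.log('clearing test data...');
}

newman.run({
    collection: require('./my-collection.json')  // hypothetical collection file path
}, function (err, summary) {
    // This callback runs after the whole collection, pass or fail - the try/finally equivalent.
    cleanUpDataStores()
        .catch((cleanupErr) => console.error('clean-up failed:', cleanupErr))
        .finally(() => {
            // Keep CI red when the run itself errored or any test failed.
            if (err || (summary && summary.run.failures.length > 0)) {
                process.exitCode = 1;
            }
        });
});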
If I'm testing a method that is supposed to return a value based on certain criteria (maybe it's validating credentials):
testAuthenticate_ValidCredentials_ReturnTrue
Should I also write a separate method to test whether it returns the correct value when the criteria aren't met?
testAuthenticate_InvalidCredentials_ReturnFalse
In other words, should I run multiple tests per method?
Yes, it is better to tailor each test to check only one functional aspect of your code, so separate tests for valid (authenticated) and invalid (rejected) credentials are the proper approach.
As to the larger issue of how many tests to write in total, ideally your tests should exercise every source line in the code being tested.
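For instance, a minimal JUnit sketch along these lines keeps the two cases as separate tests; Authenticator and its authenticate method are hypothetical stand-ins for your own class.

import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

class AuthenticatorTest {

    // Hypothetical class under test: authenticate() returns true only for valid credentials.
    private final Authenticator authenticator = new Authenticator();

    @Test
    void testAuthenticate_ValidCredentials_ReturnTrue() {
        assertTrue(authenticator.authenticate("alice", "correct-password"));
    }

    @Test
    void testAuthenticate_InvalidCredentials_ReturnFalse() {
        assertFalse(authenticator.authenticate("alice", "wrong-password"));
    }
}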
We have managed to get Jenkins to correctly parse the XML output from our tests, including the error information when there is one, so that the error that occurred can be seen directly in the test case in Jenkins.
What we would like is for Jenkins to also keep a log output, basically the console output, associated with each test case. This would enable anyone to see the actual console output of each test case, failed or not.
I haven't seen a way to do this.
* EDIT *
Clarification - I want to be able to see the actual test output directly in the Jenkins interface, the same way it is shown when there is an error, but for the whole output. I don't just want Jenkins to keep the file as an artifact.
* END OF EDIT *
Can anyone help us with this?
In the Publish JUnit test result report post-build action, tick the Retain long standard output/error checkbox.
If checked, any standard output or error from a test suite will be retained in the test results after the build completes. (This refers only to additional messages printed to console, not to a failure stack trace.) Such output is always kept if the test failed, but by default lengthy output from passing tests is truncated to save space. Check this option if you need to see every log message from even passing tests, but beware that Jenkins's memory consumption can substantially increase as a result, even if you never look at the test results!
This is simple to do - just ensure that the output file is included in the list of artifacts for that job and it will be archived according to the configuration for that job.
Not sure if you have solved it yet, but I just did something similar using Android and Jenkins.
What I did was use the-missing-android-xml-junit-test-runner (http://code.google.com/p/the-missing-android-xml-junit-test-runner/) to run the tests in the Android emulator. This creates the necessary JUnit-formatted XML files on the emulator's file system.
Afterwards, simply use 'adb pull' to copy the files over and configure Jenkins to parse the results. You can also archive the XML files as artifacts if necessary.
If you simply want to display the content of the results in the log, you can use an 'Execute shell' build step to print them to the console, where they will be captured in the log file.
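For example, an Execute shell step along these lines pulls the report and echoes it into the build log; the device path and file name are hypothetical, so use wherever your runner actually writes its report.

# Hypothetical report location on the emulator - adjust for your runner's output path.
adb pull /data/data/com.example.app/files/junit-report.xml junit-report.xml
cat junit-report.xml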
Since Jenkins 1.386 there has been an option to Retain long standard output/error in each build configuration, so you just have to check the checkbox in the post-build actions.
http://hudson-ci.org/changelog.html#v1.386
When using a declarative pipeline, you can do it like so:
junit testResults: '**/build/test-results/*/*.xml', keepLongStdio: true
See the documentation:
If checked, the default behavior of failing a build on missing test result files or empty test results is changed to not affect the status of the build. Please note that this setting makes it harder to spot misconfigured jobs or build failures where the test tool does not exit with an error code when not producing test report files.
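For context, a minimal declarative Jenkinsfile sketch might publish the results in a post { always { ... } } block so they are recorded even when the build fails; the stage layout and the ./gradlew test command below are assumptions.

pipeline {
    agent any
    stages {
        stage('Test') {
            steps {
                // Hypothetical test command - replace with whatever produces your JUnit XML.
                sh './gradlew test'
            }
        }
    }
    post {
        always {
            // Publish results and keep the full stdout/stderr of every test case.
            junit testResults: '**/build/test-results/*/*.xml', keepLongStdio: true
        }
    }
}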
I'd like to unit test a gen_fsm that uses a fairly large record for its state. The record is defined within the erl file that also defines the gen_fsm and thus is not (to my knowledge) visible to other modules.
Possible approaches:
Put the record into an hrl file and include that in both modules. This is ok, but spreads code that is logically owned by the gen_fsm across multiple files.
Fake a record with a raw tuple in the unit test module. This would get pretty ugly, as the record already has over 20 fields.
Export a function from my gen_fsm that will convert a proplist to the correct record type with some record_info magic. While possible, I don't like the idea of polluting my module interface.
Actually spawn the gen_fsm and send it a series of messages to put it in the right state for the unit test. There is substantial complexity to this approach (although Meck helps) and I feel like I'm wasting these great, pure Module:StateName functions that I should be able to call without a whole bunch of setup.
Any other suggestions?
You might consider just putting your tests directly into your gen_fsm module, which of course gives them access to the record. If you'd rather not include the tests in production code, and assuming you're using EUnit, you can conditionally compile them in or out as indicated in the EUnit user's guide:
-ifdef(EUNIT).
% test code here
...
-endif.
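For example, a minimal sketch along these lines keeps the EUnit include near the top of the module and calls the pure StateName function directly, without spawning the process; the #state{} fields and the idle/2 state function are hypothetical stand-ins for your own.

%% Near the top of the gen_fsm module:
-include_lib("eunit/include/eunit.hrl").

%% Conditionally compiled test code; EUNIT is defined by eunit.hrl unless NOTEST is set.
-ifdef(EUNIT).
idle_start_test() ->
    %% The record is visible here, so it can be built directly.
    State = #state{retries = 3},
    {next_state, running, NewState} = idle(start, State),
    ?assertEqual(3, NewState#state.retries).
-endif.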