Is there a way to mark a test as XFAIL in robot framework?
I would like to execute the tests and if they have a defect tag associated with them I would like to mark them as XFAIL.
Is it possible to implement this using ResultWriter or any other module?
I found this as a workaround that seems to be acceptable: Add this to your test case:
# REMOVE WHEN FIXED!
Pass Execution
...    This test fails and is a known bug! (add bug ref here)
...    Known Bugs
This way known bugs are forced to pass but are tagged as "Known Bugs" and are visible in the run log.
If having support for multiple test case statuses is important, have a look at the Robot Framework plugin for generating XML files that are compatible with Allure reports. See here for an example report.
In Allure reporting there are five statuses (Failed, Broken, Cancelled, Pending and Passed) in addition to five severity levels (Blocker, Critical, Normal, Minor and Trivial). As Robot Framework doesn't support these statuses, a lookup is done on the tags that have been set, and based on those the plugin determines the Allure status and severity.
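Just to illustrate the kind of tag lookup involved (this is not the plugin's actual code; the tag names and the status choices are invented for the example):

# Illustrative only: deriving an Allure-style status and severity from Robot tags.
SEVERITY_TAGS = {"blocker", "critical", "normal", "minor", "trivial"}

def allure_status_and_severity(tags, robot_status):
    lowered = [t.lower() for t in tags]
    severity = next((t for t in lowered if t in SEVERITY_TAGS), "normal")
    if robot_status == "PASS":
        status = "passed"
    elif "known bugs" in lowered:
        status = "pending"  # arbitrary mapping chosen for this example
    else:
        status = "failed"
    return status, severity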
In case the Allure Report plugin doesn't work for you, perhaps you can use its approach to generate a log file of your own through Robot Framework's Listener functionality. This is a set of predefined events that you can create a class for, with Log Message and Message being two of particular interest to you.
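As a rough sketch of such a listener (this assumes listener API version 2; the file name, class name, tag and output format are placeholders of mine, not part of any existing plugin):

# known_bug_listener.py -- illustrative sketch only
class KnownBugListener(object):
    """Writes log messages and known-bug test results to a custom log file."""

    ROBOT_LISTENER_API_VERSION = 2

    def __init__(self, output_path="known_bugs.log"):
        self.output = open(output_path, "w")

    def log_message(self, message):
        # Called for each message logged inside keywords; 'message' is a dict.
        self.output.write("%s %s %s\n" % (message["timestamp"],
                                          message["level"],
                                          message["message"]))

    def end_test(self, name, attributes):
        # 'attributes' includes e.g. 'status', 'tags' and 'message'.
        if "Known Bugs" in attributes["tags"]:
            self.output.write("XFAIL %s: %s\n" % (name, attributes["message"]))

    def close(self):
        self.output.close()

The listener is then taken into use with something like robot --listener known_bug_listener.KnownBugListener tests/ (with the module on PYTHONPATH).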
Another option is the recently released project Robot Background logger, which extends the standard logger class of Robot Framework. This should give you some control over the formatting of the messages.
Related
I'm looking for a unit test framework that tracks every assert in the code, pass or fail. I looked into Google Test, which is based on xUnit, and it only tracks failures. I need this because I work in a company that makes medical devices and we must keep evidence of validation that can be audited by the FDA. We want a test report that tells you what the test did, not just that it passed. Also, the framework would have to be usable with POSIX C++.
Ideally what I would like to have is something like this (using Google Test syntax):
EXPECT_EQ(1, x, "checking x value");
and the test would generate a report that has the following for every assert: a description, the expected value, the actual value, the comparison type, and a pass/fail status.
It looks like I'll have to create my own test framework to accomplish this. I stepped into the code of Google Test to verify that it really does nothing for a passing assert. Before creating my own, I wanted to see if there were other ideas, such as a framework that already does this or could be modified to do so.
Why not simply generate a json/xml/html report as part of your build process and then check that file into some kind of source control?
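For example, GoogleTest can already emit a JUnit-style XML report via --gtest_output, and a small script can turn that into whatever evidence format you need. A sketch (the binary and file names are placeholders, and note this is still per test case, not per assert):

# summarize_gtest.py -- illustrative sketch only
import subprocess
import xml.etree.ElementTree as ET

# Ask GoogleTest to write a JUnit-style XML report.
subprocess.run(["./my_tests", "--gtest_output=xml:results.xml"], check=False)

root = ET.parse("results.xml").getroot()
for case in root.iter("testcase"):
    failed = case.find("failure") is not None
    print("%s.%s: %s" % (case.get("classname"), case.get("name"),
                         "FAIL" if failed else "PASS"))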
I'm looking for a piece (or a set) of software that lets me store the outcome (ok/failed) of an automated test together with additional information (the test protocol, to see the exact reason for a failure, and the device state at the end of a test run as a compressed archive). The results should be accessible via a web UI.
I don't need fancy pie charts or colored graphs. A simple table is enough. However, the user should be able to filter for specific test runs and/or specific tests. The test runs should have a sane name (like the version of the software that was tested, not just some number).
Currently the build system includes unit tests based on cmake/ctest whose results should be included. Furthermore, integration testing will be done in the future, where the actual tests will run on embedded hardware controlled via network by a shell script or similar. The format of the test results is therefore flexible and could be something like subunit or TAP, if that helps.
I have played around with Jenkins, which is said to be great for automatic tests, but the plugins I tried to make that work don't seem to interact well. To be specific: the test results analyzer plugin doesn't show tests imported with the TAP plugin, and the names of the test runs are just a meaningless build number, although I used the Job Name Setter plugin to set a sensible job name. The filtering options are limited, too.
My somewhat uneducated guess is that I'll stumble over similar issues if I try other random tools in the same class as Jenkins.
Is anyone aware of a solution for my described testing scenario? Lightweight/open source software is preferred.
I am working on an existing web application (written in .NET) which, surprisingly, has a few bugs in it. We track outstanding issues in a bug tracker (JIRA, in this case), and have some test libraries in place already (written in NUnit).
What I would like to be able to do is, when closing an issue, to be able to link that issue to the unit-/integration-test that ensures that a regression does not occur, and I would like to be able to expose that information as easily as possible.
There are a few things I can think of off-hand, that can be used in different combinations, depending on how far I want to go:
copy the URL of the issue and paste it as a comment in the test code;
add a Category attribute to the test and name it Regressions, so I can select regression tests explicitly and run them as a group (but how to automatically report on which issues have failed regression testing?);
make the issue number part of the test case name;
create a custom Regression attribute that takes the URI of the issue as a required parameter (for a rough illustration of this idea using pytest markers, see the sketch after this list);
create a new custom field in the issue tracker to store the name (or path) of the regression test(s);
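The question is about NUnit, but to make the custom-attribute idea concrete, here is roughly the same pattern expressed with pytest markers in Python (the marker name, issue URL and test are made up):

# conftest.py -- register a custom "regression" marker (illustrative only)
def pytest_configure(config):
    config.addinivalue_line(
        "markers",
        "regression(issue): regression test guarding against a tracked issue")

# test_orders.py
import pytest

@pytest.mark.regression(issue="https://jira.example.com/browse/PROJ-123")
def test_discount_is_not_applied_twice():
    assert 2 + 2 == 4  # stand-in for the real regression check

Running pytest -m regression then selects only the regression tests, and a hook can read item.get_closest_marker("regression").kwargs["issue"] to report which tracked issues have failing tests.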
The ideal scenario for me would be that I can look at the issue tracker and see which issues have been closed with regression tests in place (a gold star to that developer!), and to look at the test reports and see which issues are failing regression tests.
Has anyone come across, or come up with, a good solution to this?
I fail to see what makes regression tests different from any other tests. Why would you want to run only regression tests, or everything except regression tests? If a regression or non-regression test fails, that means this specific functionality is not working, and the product owner has to decide how critical the problem is. If you stop differentiating tests, then simply do code reviews and don't allow any commits without tests.
In case we want to see what tests have been added for specific issues, we go to the issue tracking system. All the commits are there, ideally only one (thanks to squashing); the issue tracker is connected to git, so we can easily browse the changed files.
In case we want to see (for whatever reason) which issue is related to some specific test or line of code, we just give tests a meaningful business name, which helps in finding any related information. If the problem is more technical and we know we may need a specific issue, we simply add the issue number in a comment. If you want to automate retrieval, just standardize the format of the comment.
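If the comment format is standardized, retrieval is easy to automate. A rough Python sketch (the directory name and the "issue: PROJ-123" convention are assumptions of mine):

# find_issue_refs.py -- illustrative sketch only
import os
import re

PATTERN = re.compile(r"issue:\s*([A-Z]+-\d+)")

for dirpath, _, filenames in os.walk("tests"):
    for filename in filenames:
        path = os.path.join(dirpath, filename)
        with open(path, encoding="utf-8", errors="ignore") as handle:
            for lineno, line in enumerate(handle, start=1):
                match = PATTERN.search(line)
                if match:
                    print("%s:%d -> %s" % (path, lineno, match.group(1)))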
Another helpful technique is to structure your program correctly. If every piece of functionality has its own package, and tests are packaged and named meaningfully, then it's also easy to find any related code.
How to control output from Twisted-trial tests?
I have looked for different solutions, but I'm quite new to testing, so I can't find a fitting solution or can't use it correctly.
In general, I'm trying to build an automated testing system for my project, something like BuildBot. But BuildBot doesn't fit my needs because it reacts only to the "on change sources" hook from Mercurial, and I want to use other hooks too.
On this page from the BuildBot documentation I found this information:
One advantage of trial is that the Buildbot happens to know how to
parse trial output, letting it identify which tests passed and which
ones failed. The Buildbot can then provide fine-grained reports about
how many tests have failed, when individual tests fail when they had
been passing previously, etc.
Does this mean that there is no way other than parsing information from the test output?
Other possible solutions?
Besides, I looked in the Twisted documentation and found the IReporter class.
Is it a solution and if it is, how can I use it?
If it isn't, are there any other solutions?
P.S. Please note that the tests have already been written, so I can only run them and can't modify the source code.
You can format output from trial arbitrarily by writing a reporter plugin. You found the interface for that plugin already - IReporter.
Once you write such a plugin, you'll be able to use it by adding --reporter=yourplugin to your trial command line arguments.
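As a very rough sketch (the class name and log format here are placeholders; check the twisted/plugins/twisted_trial.py file shipped with Twisted for how the built-in reporters are registered), a custom reporter can subclass the stock Reporter and override the result callbacks:

# myproject/reporter.py -- illustrative sketch only
from twisted.trial.reporter import Reporter

class MachineReadableReporter(Reporter):
    """Writes one easily parseable line per test result."""

    def addSuccess(self, test):
        Reporter.addSuccess(self, test)
        self.write("PASS %s\n", test.id())

    def addFailure(self, test, failure):
        Reporter.addFailure(self, test, failure)
        self.write("FAIL %s\n", test.id())

    def addError(self, test, error):
        Reporter.addError(self, test, error)
        self.write("ERROR %s\n", test.id())

To make it selectable with --reporter=..., register a description object for it in a module under twisted/plugins/, mirroring what twisted_trial.py does for the built-in reporters.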
You can see the list of reporter plugins that are already available on your system using trial --help-reporters. If you have python-subunit installed then you'll see subunit which is a machine-parseable format that might already satisfy your requirements. Unfortunately it's still a subunit v1 reporter and subunit v2 is better in a number of ways. Still, it might suffice.
How would you "unit test" a report created by some report engine like Crystal Reports or SQL Server Reporting Services?
The problem with reports is akin to the problem with GUIs.
If the Report/GUI has lot of (misplaced) intelligence it is going to make testing difficult.
The solution then is to
Separated Presentation: separate presentation from content (data access/domain/business rules). In the current context this would mean creating some sort of ViewModel class that mirrors the content of the final report (e.g. if you have order details and line items in your report, this class should have properties for the details and a list of line item objects). The ViewModel is infinitely simpler to test. The last mile, applying presentation to the content, should be relatively trivial (thin UI).
e.g. if you use XSLT to render your reports, you can test the data XML using tools like XmlUnit or a string compare. You can then be reasonably confident in the XSL transformations that turn the data XML into the final report. Also, any bugs here would be trivial to fix.
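To make the view-model idea concrete, a minimal Python sketch (the report fields and numbers are invented) could be:

# Illustrative only: a view model mirroring the report content, plus a unit test.
class OrderReportViewModel:
    def __init__(self, customer_name, items):
        # items: list of (description, quantity, unit_price) tuples
        self.customer_name = customer_name
        self.line_items = [(desc, qty * price) for desc, qty, price in items]
        self.total = sum(amount for _, amount in self.line_items)

def test_order_report_view_model():
    vm = OrderReportViewModel("ACME", [("Widget", 2, 5.0), ("Gadget", 1, 7.5)])
    assert vm.line_items == [("Widget", 10.0), ("Gadget", 7.5)]
    assert vm.total == 17.5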
However, if you're using third-party tools like Crystal Reports, you have no control over, or access to hook into, the report generation. In such cases, the best you can do is generate representative/expected output files (e.g. PDFs), called Golden Files. Use these as read-only resources in your tests to compare against the actual output. This approach is very fragile, though, in that any substantial change to the report generation code might render all previous Golden Files incorrect, so they would have to be regenerated. If the cost-to-benefit ratio of automation is too high, I'd say go manual with old-school Word-doc test plans.
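The golden-file check itself can stay very small, as long as there is a deliberate way to regenerate the expected files after a reviewed change. A hedged Python sketch (the paths and the regenerate switch are placeholders):

# Illustrative only: byte-for-byte comparison against a stored golden file.
import filecmp
import shutil

def check_against_golden(actual_path, golden_path, regenerate=False):
    if regenerate:
        # Only do this after manually reviewing the new output.
        shutil.copyfile(actual_path, golden_path)
        return True
    return filecmp.cmp(actual_path, golden_path, shallow=False)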
For testing our own Java-based reporting product, i-net Clear Reports, we run a whole slew of test reports once, export them to various formats, make sure the output is as desired, and then continuously have these same reports run daily, comparing the results to the original data. Any differences then show up as test failures.
It has worked pretty well for us. The disadvantage is that minor differences which might not actually matter also show up as test failures until the test data is reset.
Side note: this isn't exactly a unit test but rather an acceptance test. But I don't see how you could truly "unit test" an entire report.
The best I can think of, is comparing the results to an expected output.
Maybe some intelligence can be added, but it is not that easy to test these big blocks.
I agree with Gamecat.
Generate the report from fixed (constant) data, and compare it to the expected output for that data.
After that you might be able to use simple tests such as diff (checking whether the files are identical).
My current idea is to create tests at two levels:
Unit tests: Structure the report to enable testing using some ideas for testing a UI, like Humble View. The report itself will be made as dumb as possible. It should consist mostly of just simple field bindings. The data items/objects that act as the source of these bindings can then be unit tested.
Acceptance tests: Generate some example reports. Verify them by hand first. Then set up an automated test that does a comparison using diff.