Will Allure provide a historical report of all the tests that have been run over a long period of time? I am open to other options as well. I am looking for a solution to the problem of having a bunch of individual test reports. Previously, Extent Reports was used, but we would rather have all the results in a centralized area that is available to everyone.
Update: we use Selenium, JUnit, Cucumber, and Java.
Take a look at Klov.
Docs
Demo
There you have a "Builds" menu where you can see all the reports.
I like it because:
The test cases are updated in real time after execution
I can track progress on different versions
What I don't like:
Screenshots issue
Description and Author are not displayed
Hope it helps!
We have a TFS gated check-in which uses the MSTest workflow activity to run our unit tests. We recently came across some issues because the result folder that the MSTest activity creates is too long, so some of our unit tests are now failing because of it. It looks like it uses a pattern like <user>_<machine_name> <date> <time>_<platform>_<config>, so we see very lengthy directory names like "tfsbuild_machine123 2015-09-10 10_00_00_Any CPU_Debug". I did some digging into the workflow and its options but couldn't identify where this pattern comes from. I would appreciate it if someone could point me to where it comes from and how I can change it so we get more room for our unit testing.
I assume you're referring to the test part of the Build Summary page, like:
As far as I know, the Summary part of the Build Summary page is actually a SummaryFactory type which derives from IBuildDetailFactory; it is not defined in the TFS build process template. The SummaryFactory class contains functions like CreateSections and CreateNodes which are used to create the nodes on the Summary page, for example a hyperlink with the format <user>_<machine_name> <date> <time>_<platform>_<config>. However, SummaryFactory.cs is an internal class, so you can't use it in your own program, nor customize the test hyperlink format.
For your issue, I would still like to see the detailed error message to find out what's actually going wrong.
We're developing a lot of enhancements to Salesforce using Visualforce and Apex as part of a larger system, and as part of our quality metrics we have to provide a report to management on our code coverage.
I'd like to get a report similar to the one produced by Run All Tests in the Force.com IDE but in HTML so I can display it easily via a web interface.
For the rest of our system we use Sonar http://www.sonarsource.org/ to produce the reports.
Does anybody know the best approach to this?
I've explored the API documentation but am unable to find out whether the coverage percentages are stored against the classes, so querying for them isn't an option.
Any help or pointers would be greatly appreciated.
If you run the Apex tests yourself via the API, there are objects returned indicating which lines haven't been covered by the tests in that run. You can run the tests via either the synchronous or asynchronous methods.
Then you can use the data to create a report in a format that you require. For example, I've used it to create a basic report in the FuseIT SFDC Explorer (Windows based and free). I'm just dumping out line ranges that aren't covered.
You will probably want to run all the tests in one run to get the complete code coverage across all the tests. For example, in the screenshot above I only ran one out of a much greater number of test classes, so it looks like the code coverage was much lower than the cumulative tests would give. It does, however, show which lines an individual test class reaches.
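If it helps, the per-class totals can also be pulled straight from the Tooling API's ApexCodeCoverageAggregate object after a test run, rather than parsing the run-test result objects yourself. Below is a rough Java sketch (the instance URL, API version, and authentication handling are placeholders you would need to supply); it just dumps the JSON, which you could then feed into your own HTML template or a Sonar-friendly format:

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

// Rough sketch: query per-class coverage totals from the Salesforce Tooling API.
// INSTANCE_URL, API_VERSION and the session token are placeholders you must supply.
public class CoverageReport {

    private static final String INSTANCE_URL = "https://yourinstance.my.salesforce.com";
    private static final String API_VERSION = "v58.0";

    public static void main(String[] args) throws Exception {
        String sessionToken = args[0]; // obtain via OAuth or a login call beforehand

        String soql = "SELECT ApexClassOrTrigger.Name, NumLinesCovered, NumLinesUncovered "
                    + "FROM ApexCodeCoverageAggregate";
        String url = INSTANCE_URL + "/services/data/" + API_VERSION + "/tooling/query/?q="
                   + URLEncoder.encode(soql, StandardCharsets.UTF_8);

        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("Authorization", "Bearer " + sessionToken)
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The JSON response lists covered/uncovered line counts per class or trigger,
        // reflecting the most recent test runs; parse it with your preferred JSON library.
        System.out.println(response.body());
    }
}
```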
I've also heard good things about MavensMate for the Sublime Text editor. Being open source, you should be able to see how it integrates with the testing API and then generates its reports.
Currently we are generating HTML reports for our automation, but those reports are not good enough to explain the number of scenarios we cover. Is there anything we can use with Selenium to generate proper reports that give a complete overview and can be easily understood by anyone?
First, we want to show a pie chart covering the number of test cases passed and failed.
Second, we want to show which test cases are in this build.
You may also check out the page http://seleniumtesting-nx.blogspot.de/2012/03/testng-xslt-report-generation.html
I found it useful. :)
It's easy to write your own TestNG reporter; look up IReporter in the documentation.
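For reference, a minimal reporter along those lines might look like the sketch below (the class name and output file are just placeholders); it tallies passed/failed/skipped counts per suite, which is exactly the kind of data the pie chart mentioned above would be built from:

```java
import java.io.FileWriter;
import java.io.IOException;
import java.util.List;

import org.testng.IReporter;
import org.testng.ISuite;
import org.testng.ISuiteResult;
import org.testng.ITestContext;
import org.testng.xml.XmlSuite;

// Minimal custom reporter: writes a per-suite pass/fail/skip summary as an HTML table.
public class SummaryReporter implements IReporter {

    @Override
    public void generateReport(List<XmlSuite> xmlSuites, List<ISuite> suites, String outputDirectory) {
        StringBuilder html = new StringBuilder("<html><body><h1>Test Summary</h1><table border='1'>");
        html.append("<tr><th>Suite</th><th>Passed</th><th>Failed</th><th>Skipped</th></tr>");

        for (ISuite suite : suites) {
            int passed = 0, failed = 0, skipped = 0;
            for (ISuiteResult result : suite.getResults().values()) {
                ITestContext ctx = result.getTestContext();
                passed += ctx.getPassedTests().size();
                failed += ctx.getFailedTests().size();
                skipped += ctx.getSkippedTests().size();
            }
            html.append("<tr><td>").append(suite.getName())
                .append("</td><td>").append(passed)
                .append("</td><td>").append(failed)
                .append("</td><td>").append(skipped)
                .append("</td></tr>");
        }
        html.append("</table></body></html>");

        try (FileWriter writer = new FileWriter(outputDirectory + "/summary.html")) {
            writer.write(html.toString());
        } catch (IOException e) {
            throw new RuntimeException("Could not write summary report", e);
        }
    }
}
```

You would then register it (fully qualified) in testng.xml with a <listener class-name="..."/> entry so TestNG picks it up after the run.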
How would you "unit test" a report created by some report engine like Crystal Reports or SQL Server Reporting Services?
The problem with reports is akin to the problem with GUIs.
If the report/GUI has a lot of (misplaced) intelligence, it is going to make testing difficult.
The solution then is to
Separated Presentation: separate presentation from content (data access/domain/business rules). In the current context this would mean that you create some sort of ViewModel class that mirrors the content of the final report (e.g. if you have order details and line items in your report, this class should have properties for the details and a list of line item objects). The ViewModel is infinitely simpler to test. The last mile, applying presentation to the content, should be relatively trivial (thin UI).
e.g. if you use XSLT to render your reports, you can test the data XML using tools like XMLUnit or a string compare. You can be reasonably confident in the XSL transformations on the data XML for the final report... Also, any bugs here would be trivial to fix.
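To make the Separated Presentation idea above concrete, here is a rough sketch in Java with JUnit (the class and property names are invented purely for illustration): the ViewModel holds the report's content and its calculations, so it can be asserted on directly without rendering anything:

```java
import static org.junit.Assert.assertEquals;

import java.math.BigDecimal;
import java.util.Arrays;
import java.util.List;

import org.junit.Test;

// Illustrative ViewModel: mirrors the content of an "order details" report.
class OrderReportViewModel {
    static class LineItem {
        final String description;
        final BigDecimal amount;
        LineItem(String description, BigDecimal amount) {
            this.description = description;
            this.amount = amount;
        }
    }

    final String customerName;
    final List<LineItem> lineItems;

    OrderReportViewModel(String customerName, List<LineItem> lineItems) {
        this.customerName = customerName;
        this.lineItems = lineItems;
    }

    // The "intelligence" (totalling) lives here, not in the report template.
    BigDecimal total() {
        return lineItems.stream()
                .map(item -> item.amount)
                .reduce(BigDecimal.ZERO, BigDecimal::add);
    }
}

public class OrderReportViewModelTest {
    @Test
    public void totalIsSumOfLineItems() {
        OrderReportViewModel vm = new OrderReportViewModel("ACME Corp", Arrays.asList(
                new OrderReportViewModel.LineItem("Widget", new BigDecimal("10.00")),
                new OrderReportViewModel.LineItem("Gadget", new BigDecimal("5.50"))));

        assertEquals(new BigDecimal("15.50"), vm.total());
        assertEquals(2, vm.lineItems.size());
    }
}
```

The report template then only binds fields from the ViewModel, so there is very little left in it that can break.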
However, if you're using third-party vendors like Crystal Reports, you have no control over or access to hook into the report generation. In such cases, the best you can do is generate representative/expected output files (e.g. PDFs), called golden files. Use these as a read-only resource in your tests to compare the actual output against. However, this approach is very fragile, in that any substantial change to the report generation code might render all previous golden files incorrect, so they would have to be regenerated. If the cost-to-benefit ratio of automation is too high, I'd say go manual with old-school Word doc test plans.
For testing our own Java-based reporting product, i-net Clear Reports, we run a whole slew of test reports once, exporting them to various export formats, make sure the output is as desired, and then continuously have these same reports run daily, comparing the results to the original data. Any differences then show up as test failures.
It has worked pretty well for us. The disadvantage is that any minor differences that might not actually matter show up as test failures until the test data is reset.
Side note: this isn't exactly a unit test but rather an acceptance test. But I don't see how you could truly "unit test" an entire report.
The best I can think of is comparing the results to an expected output.
Maybe some intelligence can be added, but it is not that easy to test these big blocks.
I agree with Gamecat.
Generate the report from fixed (constant) data, and compare it to the expected output for that data.
After that you might be able to use simple tests such as diff (checking whether the files are identical).
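As a concrete sketch of that comparison (the file paths and the report-generation step are placeholders for whatever your engine provides), the diff itself can be a plain JUnit test:

```java
import static org.junit.Assert.assertArrayEquals;

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

import org.junit.Test;

public class ReportGoldenFileTest {

    @Test
    public void generatedReportMatchesGoldenFile() throws Exception {
        // Placeholder: invoke your report engine here against fixed test data,
        // writing its output to "target/actual-report.csv".
        Path actual = Paths.get("target/actual-report.csv");
        Path golden = Paths.get("src/test/resources/expected-report.csv");

        // Byte-for-byte comparison, i.e. the automated equivalent of diff.
        assertArrayEquals(
                Files.readAllBytes(golden),
                Files.readAllBytes(actual));
    }
}
```

For output formats that embed timestamps or generation metadata (e.g. PDF), you would normalize or strip those fields before comparing, otherwise every run differs.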
My current idea is to create tests at two levels:
Unit tests: structure the report to enable testing, using some of the ideas for testing a UI, like Humble View. The report itself will be made as dumb as possible; it should consist mostly of simple field bindings. The data items/objects that act as the source of these bindings can then be unit tested.
Acceptance tests: generate some example reports. Verify them by hand first, then set up an automated test that does a comparison using diff.
We develop custom survey web sites and I am looking for a way to automate the pattern testing of these sites. Surveys often contain many complex rules and branches which are triggered by how items are responded to. All surveys are rigorously tested before being released to clients, and this testing results in a lot of manual work. I would like to learn about options I could use to automate these tests by responding to questions and verifying the results in the database. The survey sites are produced by an engine which creates and writes ASP pages and processes the responses into a database, so the only way I can see to test a site is to interact with the web pages themselves. I guess in a way I need to build some type of bot; I really don't know much about the design behind them.
Could someone please provide some suggestions on how to achieve this? Thank you for your time.
Brett
Check out Selenium: http://selenium.openqa.org/
Also, check out the answers to this other question: https://stackoverflow.com/questions/484/how-do-you-test-layout-design-across-multiple-browsersoss
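As a rough sketch of the kind of bot you could build with it (all URLs, element IDs, and table/column names below are invented for illustration), a Selenium WebDriver test can answer the survey questions in the browser, assert the branching, and then verify the stored response over JDBC:

```java
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class SurveyBranchingTest {

    @Test
    public void answeringYesShowsFollowUpAndStoresResponse() throws Exception {
        WebDriver driver = new ChromeDriver();
        try {
            // Illustrative URL and element IDs; replace with your survey's own.
            driver.get("http://localhost/survey/start.asp");

            driver.findElement(By.id("q1_yes")).click();
            driver.findElement(By.id("nextButton")).click();

            // Branching rule under test: answering "yes" to Q1 should reveal Q2.
            assertTrue(driver.findElement(By.id("q2")).isDisplayed());

            driver.findElement(By.id("q2_option3")).click();
            driver.findElement(By.id("submitButton")).click();

            // Verify the response actually landed in the database (connection details are placeholders).
            try (Connection conn = DriverManager.getConnection(
                         "jdbc:sqlserver://localhost;databaseName=Surveys", "user", "password");
                 PreparedStatement stmt = conn.prepareStatement(
                         "SELECT answer FROM responses WHERE question_id = ?")) {
                stmt.setInt(1, 2);
                try (ResultSet rs = stmt.executeQuery()) {
                    assertTrue(rs.next());
                    assertEquals("option3", rs.getString("answer"));
                }
            }
        } finally {
            driver.quit();
        }
    }
}
```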
You could also check out WatiN.
It sounds like your engine could generate a test script using something like Test::WWW::Mechanize.
The usual test methodologies apply: white box and black box.
White box testing may mean instrumenting your application so you can drive it into a particular state, and then you can predict the result you expect.
Black box may mean that you hit a page, then consider which of the possible outcomes are valid. Rinse and repeat until you get sufficient coverage.
Another thing we use is monitoring statistics for our service: did we get the expected number of hits on this page? We routinely run A/B tests, and I have run A/B tests against refactored code to verify that nothing changed before rolling things out.
/Allan
I can think of a couple of good web application testing suites that should get the job done - one free/open source and one commercial:
Selenium (open source/cross platform)
TestComplete (commercial/Windows-based)
Both will let you create test suites that verify database records based on interactions with the web app.
The fact that you're Windows/ASP based might mean that TestComplete will get you up and running faster, as it's native to Windows and .NET. You can download a free trial to see if it'll work for you before making the investment.
Check out the unit testing framework 'lime' that comes with the Symfony framework: http://www.symfony-project.org/book/1_0/15-Unit-and-Functional-Testing. You didn't mention your language; lime is PHP.
I would suggest the mechanize gem, available for Ruby. It's pretty intuitive to use.
I use QEngine (commercial) for the same purpose. I need to add data and then check for it in the UI, so I write one script which does this and call it in a loop. The data can be passed via either CSV or Excel.
Check out www.qengine.com; you can also try Watir.
My proposal is QA Agent (http://qaagent.com). It seems to be a new approach because you do not need to install anything - you just develop your web tests in the browser-based IDE. By the way, you can develop your tests using jQuery and JavaScript. Really cool!