How would you "unit test" a report created by some report engine like Crystal Reports or SQL Server Reporting Services?
The problem with reports is akin to the problem with GUIs.
If the report/GUI has a lot of (misplaced) intelligence, it is going to make testing difficult.
The solution then is to:
Separated Presentation: separate presentation from content (data access/domain/business rules). In the current context this would mean that you create some sort of ViewModel class that mirrors the content of the final report (e.g. if you have order details and line items in your report, this class should have properties for the details and a list of line item objects). The ViewModel is infinitely simpler to test. The last mile, applying presentation to the content, should be relatively trivial (thin UI).
E.g. if you use XSLT to render your reports, you can test the data XML using tools like XmlUnit or a simple string compare. You can then be reasonably confident in the XSL transformation of the data XML into the final report, and any bugs in it would be trivial to fix.
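To make the ViewModel idea concrete, here is a minimal Python sketch (the report engine is not involved at all, and the class and field names are hypothetical); the point is that the report's content can be asserted on without rendering anything:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class LineItemViewModel:
    description: str
    quantity: int
    unit_price: float

    @property
    def total(self) -> float:
        return self.quantity * self.unit_price

@dataclass
class OrderReportViewModel:
    order_number: str
    customer_name: str
    line_items: List[LineItemViewModel] = field(default_factory=list)

    @property
    def grand_total(self) -> float:
        return sum(item.total for item in self.line_items)

# The ViewModel is testable without touching the report engine.
def test_grand_total_sums_line_items():
    report = OrderReportViewModel("SO-1001", "Acme Corp", [
        LineItemViewModel("Widget", 2, 10.0),
        LineItemViewModel("Gadget", 1, 5.0),
    ])
    assert report.grand_total == 25.0
```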
However, if you're using third-party products like Crystal Reports, you have no control over or access to hook into the report generation. In such cases, the best you can do is generate representative/expected output files (e.g. PDFs), called Golden Files. Use these as read-only resources in your tests to compare the actual output against. This approach is very fragile, though, in that any substantial change to the report generation code might render all previous Golden Files incorrect, so they would have to be regenerated. If the cost-to-benefit ratio of automation is too high, I'd say go manual with old-school Word doc test plans.
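A golden-file check can be as simple as a byte-for-byte comparison. A rough sketch (the paths are hypothetical, and the report is assumed to have been generated in an earlier step; note that formats like PDF often embed timestamps, which is part of why this is fragile):

```python
import filecmp

# Hypothetical paths: the golden file is checked into the repository,
# the actual file is produced by the report engine before the test runs.
GOLDEN_PATH = "tests/golden/order_report.pdf"
ACTUAL_PATH = "build/reports/order_report.pdf"

def test_report_matches_golden_file():
    # shallow=False forces a byte-for-byte comparison instead of a stat check.
    assert filecmp.cmp(GOLDEN_PATH, ACTUAL_PATH, shallow=False), \
        "Report differs from the golden file; regenerate the golden file if the change is intended"
```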
For testing our own Java-based reporting product, i-net Clear Reports, we run a whole slew of test reports once, exporting them to various export formats, make sure the output is as desired, and then continuously have these same reports run daily, comparing the results to the original data. Any differences then show up as test failures.
It has worked pretty well for us. The disadvantage is that any minor differences that don't actually matter still show up as test failures until the test data is reset.
Side note: this isn't exactly a unit test but rather an acceptance test. But I don't see how you could truly "unit test" an entire report.
The best I can think of, is comparing the results to an expected output.
Maybe some intelligence can be added, but it is not that easy to test these big blocks.
I agree with Gamecat.
Generate the report from fixed (constant) data, and compare it to the expected output for that data.
After that you might be able to use simple tests such as diff (checking whether the files are identical).
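For text-based report output, something along these lines would do it (a rough sketch; the file paths are made up, and the report is assumed to have been generated from the fixed data beforehand):

```python
import difflib

def assert_report_matches(expected_path: str, actual_path: str) -> None:
    """Fail with a readable unified diff if the two report files differ."""
    with open(expected_path) as f:
        expected = f.readlines()
    with open(actual_path) as f:
        actual = f.readlines()
    diff = list(difflib.unified_diff(expected, actual,
                                     fromfile=expected_path, tofile=actual_path))
    assert not diff, "Report differs from expected output:\n" + "".join(diff)

def test_sales_report_matches_expected():
    assert_report_matches("tests/expected/sales_report.txt",
                          "build/output/sales_report.txt")
```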
My current idea is to create tests at two levels:
Unit tests: Structure the report to enable testing using some ideas for testing a UI, like Humble View. The report itself will be made as dumb as possible. It should consist mostly of just simple field bindings. The data items/objects that act as the source of these bindings can then be unit tested.
Acceptance tests: Generate some example reports. Verify them by hand first. Then set up an automated test that compares the output using diff.
I'm looking for a piece (or a set) of software that allows storing the outcome (ok/failed) of an automated test and additional information (the test protocol, to see the exact reason for a failure, and the device state at the end of a test run as a compressed archive). The results should be accessible via a web UI.
I don't need fancy pie charts or colored graphs. A simple table is enough. However, the user should be able to filter for specific test runs and/or specific tests. The test runs should have a sane name (like the version of the software that was tested, not just some number).
Currently the build system includes unit tests based on cmake/ctest whose results should be included. Furthermore, integration testing will be done in the future, where the actual tests will run on embedded hardware controlled via network by a shell script or similar. The format of the test results is therefore flexible and could be something like subunit or TAP, if that helps.
I have played around with Jenkins, which is said to be great for automatic tests, but the plugins I tried to make that work don't seem to interact well. To be specific: the test results analyzer plugin doesn't show tests imported with the TAP plugin, and the names of the test runs are just a meaningless build number, although I used the Job Name Setter plugin to set a sensible job name. The filtering options are limited, too.
My somewhat uneducated guess is that I'll stumble into similar issues if I try other random tools of the same class as Jenkins.
Is anyone aware of a solution for my described testing scenario? Lightweight/open source software is preferred.
Dynamics AX 2012 comes with unit testing support.
To have meaningful tests some test data needs to be provided (stored in tables in the database).
To get a reproducible outcome of the unit tests we need to have the same data stored in the tables every time the tests are run. Now the question is, how can we accomplish this?
I learned that there is the possibility of setting the isolation level for the TestSuite to SysTestSuiteCompanyIsolateClass. This will create an empty company and delete the company after the tests have been run. In the setup() method I can populate the tables with my test data using insert statements. This works fine for small scenarios but becomes cumbersome very quickly on a real-life project.
I was wondering if there is anyone out there with a practical solution of how to use the X++ Unit Test Framework in a real world scenario. Any input is very much appreciated.
I agree that creating test data in a new and empty company only works for fairly trivial scenarios or scenarios where you implemented the whole data structure yourself. But as soon as existing data structures are needed, this approach can become very time consuming.
One approach that worked well for me in the past is to run unit tests in an existing company that already has most of the configuration data (e.g. financial setup, inventory setup, ...) needed to run the test. The test itself runs in a ttsBegin - ttsAbort block so that the unit test does not actually persist any data.
Another approach is to implement data provider methods that are test agnostic, but create data that is often used in unit tests (e.g. a method that creates a product). It takes some time to create a useful set of data provider methods, but once they exist, writing unit tests becomes a lot faster. See SysTest part V.: Test execution (results, runners and listeners) on how Microsoft uses a similar approach (or at least they used to back in 2007 for AX 4.0).
Both approaches can also be combined: you would call the data provider methods inside the ttsBegin - ttsAbort block to create the needed data only for the unit test.
Another useful method is to use doInsert or doUpdate to create your test data, especially if you are only interested in a few fields and do not need to create a completely valid record.
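To illustrate the combination of a test-agnostic data provider with a transaction that is rolled back, here is a rough, language-agnostic sketch in Python with SQLite; the real thing would of course be X++ with ttsBegin/ttsAbort and AX tables, and all names below are made up:

```python
import sqlite3
import unittest

def create_test_product(conn, name="Test product", price=9.99):
    """Test-agnostic data provider: creates a product that many tests can reuse."""
    cur = conn.execute("INSERT INTO product (name, price) VALUES (?, ?)", (name, price))
    return cur.lastrowid

class ProductPricingTest(unittest.TestCase):
    def setUp(self):
        # An in-memory database stands in for the existing company data.
        self.conn = sqlite3.connect(":memory:", isolation_level=None)
        self.conn.execute("CREATE TABLE product (id INTEGER PRIMARY KEY, name TEXT, price REAL)")
        self.conn.execute("BEGIN")      # analogous to ttsBegin

    def tearDown(self):
        self.conn.execute("ROLLBACK")   # analogous to ttsAbort: nothing persists
        self.conn.close()

    def test_product_gets_default_price(self):
        product_id = create_test_product(self.conn)
        row = self.conn.execute("SELECT price FROM product WHERE id = ?", (product_id,)).fetchone()
        self.assertAlmostEqual(row[0], 9.99)

if __name__ == "__main__":
    unittest.main()
```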
I think that the unit test framework was an afterthought. In order to really use it, Microsoft would have needed to provide unit test classes, then when you customize their code, you also customize their unit tests.
So without that, you're essentially left coding unit tests that try and encompass base code along with your modifications, which is a huge task.
Where I think you can actually use it is around isolated customizations that perform some function, and aren't heavily built on base code. And also with customizations that are integrations with external systems.
Well, from my point of view, you will not be able to leverage much more than what you pointed out from the standard framework.
What you can do is more around release management. You can set up an integration environment with the targeted data, push your nightly build model into this environment at the end of the build process, and then run your tests.
Yes, it will need more effort to set up and maintain, but it's the only solution I've seen until now to have a large and consistent set of data to run unit or integration tests on.
To have meaningful tests some test data needs to be provided (stored in tables in the database).
As someone else already indicated - I found it best to leverage an existing company for data. In my case, several existing companies.
To get a reproducible outcome of the unit tests we need to have the same data stored in the tables every time the tests are run. Now the question is, how can we accomplish this?
We have built test helpers that help us "run the test", automating what a person would do, given you have architected your application to be testable. In essence, our test class uses the helpers to run the test, then provides most of its value in validating the data it created.
I learned that there is the possibility of setting the isolation level for the TestSuite to SysTestSuiteCompanyIsolateClass. This will create an empty company and delete the company after the tests have been run. In the setup() method I can populate the tables with my test data using insert statements. This works fine for small scenarios but becomes cumbersome very quickly on a real-life project.
I did not find this practical in our situation, so we haven't leveraged it.
I was wondering if there is anyone out there with a practical solution of how to use the X++ Unit Test Framework in a real world scenario. Any input is very much appreciated.
We've been using the testing framework as stated above and it has been working for us. The key is to find the correct scenarios to test; it also provides a good foundation for writing testable classes.
We're developing a lot of enhancements to Salesforce using Visualforce and Apex as part of a larger system. As part of our quality metrics we have to provide a report to management on our code coverage.
I'd like to get a report similar to the one produced by Run All Tests in the Force.com IDE but in HTML so I can display it easily via a web interface.
For the rest of our system we use Sonar http://www.sonarsource.org/ to produce the reports.
Does anybody know the best approach to this?
I've explored the API documentation but am unable to find out whether the coverage percentage is stored against the classes, so querying for it doesn't appear to be an option.
Any help or pointers would be greatly appreciated.
If you run the Apex tests yourself via the API, there are objects returned indicating which lines haven't been covered by the tests in that run. You can run the tests via either the synchronous or asynchronous methods.
Then you can use the data to create a report in a format that you require. For example, I've used it to create a basic report in the FuseIT SFDC Explorer (Windows based and free). I'm just dumping out line ranges that aren't covered.
You will probably want to run all the tests in one run to get the complete code coverage across all the tests. For example, if you run only one out of a much greater number of test classes, the reported code coverage will look much lower than the cumulative tests would give. It does, however, show which lines an individual test class reaches.
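Once you have the per-class coverage numbers back from a test run, turning them into an HTML table for a web interface is straightforward. A rough sketch in Python (the shape of the `results` list is an assumption here, built from whatever the test-run API returned, not what it literally returns):

```python
import html

def coverage_report_html(results):
    """Render a minimal HTML coverage table.

    `results` is assumed to be a list of dicts like
    {"class_name": "OrderService", "covered": 120, "uncovered": 15}.
    """
    rows = []
    for r in results:
        total = r["covered"] + r["uncovered"]
        pct = (100.0 * r["covered"] / total) if total else 0.0
        rows.append("<tr><td>{}</td><td>{}</td><td>{}</td><td>{:.1f}%</td></tr>".format(
            html.escape(r["class_name"]), r["covered"], r["uncovered"], pct))
    return ("<table><tr><th>Class</th><th>Covered</th><th>Uncovered</th><th>Coverage</th></tr>"
            + "".join(rows) + "</table>")

# Example usage with made-up numbers:
print(coverage_report_html([
    {"class_name": "OrderService", "covered": 120, "uncovered": 15},
    {"class_name": "InvoiceBatch", "covered": 80, "uncovered": 40},
]))
```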
I've also heard good things about MavensMate for the Sublime Text editor. Being open source, you should be able to see how it integrates with the testing API and then generates its reports.
I'm writing a parser for the output of a legacy application, and since there are no specs on the file syntax I've got as many samples of these files as I could.
Now I'm writing the unit tests before implementing the parser (because there is no other sane way to do this) but I'm not sure whether I should:
use the real files produced by the application, reading from them and comparing the parser's output with the expected output that I would store in JSON format in another file.
or create a sample string with the tokens and possibilities I want to test and a dict (this is python) with the expected output.
I'm inclined to use the second alternative because I would test only what I need to, without all the "real-world" data included on the actual files, but I'm afraid I could forget to test for one possibility or another.
What do you think?
My suggestion is to do both. Write a set of integration tests that run through all the files you have with the expected outputs then unit test with your expected inputs to isolate the parsing logic.
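A rough sketch of how those two levels could look with pytest (the parser module legacy_parser, the token format, and the directory layout are all hypothetical):

```python
import json
from pathlib import Path

import pytest

from legacy_parser import parse  # the parser under development (hypothetical name)

# Unit tests: small hand-written samples with explicit expected output.
@pytest.mark.parametrize("sample,expected", [
    ("HDR|2024-01-01\nREC|42|foo", {"date": "2024-01-01", "records": [{"id": 42, "name": "foo"}]}),
    ("HDR|2024-01-01",             {"date": "2024-01-01", "records": []}),
])
def test_parse_handles_known_tokens(sample, expected):
    assert parse(sample) == expected

# Integration tests: every captured real file compared against its stored JSON.
SAMPLE_DIR = Path("tests/samples")

@pytest.mark.parametrize("sample_file", sorted(SAMPLE_DIR.glob("*.txt")))
def test_parse_real_files(sample_file):
    expected = json.loads(sample_file.with_suffix(".json").read_text())
    assert parse(sample_file.read_text()) == expected
```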
I would recommend writing the integration tests first so you write your parser outside-in; it might be discouraging to see a bunch of failing tests, but it'll help you isolate your edge cases earlier.
Btw, I think this is a great question. I recently came across a similar problem, which was transforming large XML feeds from an upstream system into a proprietary format. My solution was to write a set of black-box integration tests for the full feeds, testing things like record counts and other high-level success metrics, then break down the inputs into smaller and smaller chunks until I was able to test all the permutations of the data. It was only then that I had a good understanding of how to build the parser.
You should be careful using production data in testing scenarios. It could be a disaster if all your users got an email from a test environment, for example. It's also probably unethical in certain scenarios for developers to have access to prod data, even if there is no way for the users to know this. Think medical, banking, or college-grades type scenarios.
My answer is you should use data that is close to prod data. If you want to use the actual prod data, you need to scrub it for the above scenarios.
Production data can be a good starting point (assuming it's not sensitive info), since there's a good chance you can't think of all the possible permutations yourself. However, once you get a good working set of data, save it somewhere static, like a file. Then have the tests get it from there instead of dynamically from the production environment. That way you can run the tests with a known set of inputs every time.
The alternative, getting production data on the fly for test inputs, is fraught with problems. Changes in the data could cause a test to pass one time, but fail the next because the inputs changed.
Don't forget to structure the test such that you can add additional possibilities (i.e., regression tests) as they become known.
Using the second solution you offer will allow you to control what is expected and what is returned, which is ideal for unit testing. When creating automated tests, it is best to avoid manual interaction as often as possible - visually scanning the results is one of those practices you should avoid when possible (assuming that's what you meant by "compare").
I am introducing automated integration testing to a mature application that until now has only been manually tested.
The app is Windows based and talks to a MySQL database.
What is the best way (including details of any tools recommended) to keep tests independent of each other in terms of the database transactions that will occur?
(Modifications to the app source for this particular purpose are not an option.)
How are you verifying the results?
If you need to query the DB (and it sounds like you probably do) for results then I agree with Kris K, except I would endeavor to rebuild the DB after every test case, not just every suite.
This helps avoid dangerous interactions between tests.
As for tools, I would recommend CppUnit. You aren't really doing unit tests, but it shouldn't matter, as the xUnit framework should give you the setup and teardown hooks you'll need to automatically set up your test fixtures.
Obviously this can result in slow-running tests, depending on your database size, population etc. You may be able to attach/detach databases rather than dropping/rebuilding.
If you're interested in further research, check out XUnit Test Patterns. It's a fine book and a good website for this kind of thing.
And thanks for automating :)
Nick
You can dump/restore the database for each test suite, etc. Since you are automating this, it may be something in the setup/teardown functionality.
I used to restore the database in the SetUp function of the database-related unit test class. This way it was ensured that each test runs under the same conditions.
You may consider preparing special database content for the tests, i.e. with less data than the current production version (to keep restore times reasonable).
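The pattern looks roughly like this with any xUnit-style framework (shown in Python for brevity; the connection details, database name, and dump file are made up):

```python
import subprocess
import unittest

# Hypothetical connection details and dump file for the test database.
DB_NAME = "app_test"
DUMP_FILE = "tests/fixtures/app_test_dump.sql"

def restore_test_database():
    """Reload the test database from the SQL dump using the mysql CLI."""
    with open(DUMP_FILE, "rb") as dump:
        subprocess.run(
            ["mysql", "--user=test", "--password=test", DB_NAME],
            stdin=dump,
            check=True,
        )

class OrderImportIntegrationTest(unittest.TestCase):
    def setUp(self):
        # Every test starts from the same known database state.
        restore_test_database()

    def test_placeholder(self):
        # Exercise the application here (e.g. drive it via UI automation),
        # then query the database to verify the results.
        pass
```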
The best environment for such testing, I believe, is VMWare or an equivalent. Set up your database, transaction log and so on, then record the whole lot - database as well as configuration. Then to re-test, reload the image and database and kick off the tests. This still requires maintenance of the tests as the system changes, but at least the tests are repeatable, which is one of your greatest challenges in integration testing.
For test automation, many people use Perl, but we've found that Perl programs grow like Topsy and become convoluted. The use of Python as a scripting language (we run C++ tests) is worthwhile if you're trying to build a series of structured tests.
As Kris K. says, dumping and restoring the database between each test will probably be the way to go.
Since you are looking at doing testing external to the App I would look to build the testing framework in a language where you can take advantage of better testing tools.
If you built the testing framework in Java you could take advantage of JUnit and potentially even something like FitNesse.
Don't think that just because the application under test is C++ that means you are stuck using C++ for your automated testing.
Please try AnyDbTest (www.anydbtest.com); I think it is the very tool you are looking for.
Features:
1. Write test cases in XML, not Java/C++/C#/VB code. No need for those expensive programming tools.
2. Supports all popular databases, such as Oracle, SQL Server, and MySQL.
3. Many kinds of assertions are supported, such as StrictEqual, SetEqual, IsSupersetOf, Overlaps, and RecordCountEqual. Plus, most assertions can be prefixed with a logical Not operator.
4. Allows using an Excel spreadsheet or XML as the source of the test data. An Excel spreadsheet makes it easy to create, edit, and maintain the test data.
5. Supports a sandbox test model: if a test runs in the sandbox, all database operations on each DB are rolled back, meaning any changes are undone.
6. Allows pumping data from one database or Excel file into the target database in the test initialization and finalization phases. This is an easy way to prepare test data.
7. Unique cross-database testing: the target and reference result sets can come from two different databases, even when one is SQL Server and the other is Oracle.
8. Set-style comparison of record sets: AnyDbTest will tell you the intersection, surplus, or absence between the two record sets.
9. Sequential-style comparison of record sets or scalar values, meaning the two result sets are compared in their original order.
10. Allows exporting the result set of a SQL statement into an XML/Excel file.