How to link regression unit tests with an issue tracker?

I am working on an existing web application (written in .NET) which, surprisingly, has a few bugs in it. We track outstanding issues in a bug tracker (JIRA, in this case), and already have some test libraries in place (written in NUnit).
What I would like to be able to do is, when closing an issue, link that issue to the unit or integration test that ensures the regression does not recur, and expose that information as easily as possible.
There are a few things I can think of off-hand, that can be used in different combinations, depending on how far I want to go:
copy the URL of the issue and paste it as a comment in the test code;
add a Category attribute to the test and name it Regressions, so I can select regression tests explicitly and run them as a group (but how to automatically report on which issues have failed regression testing?);
make the issue number part of the test case name;
create a custom Regression attribute that takes the URI of the issue as a required parameter (see the sketch after this list);
create a new custom field in the issue tracker to store the name (or path) of the regression test(s);
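To make that custom-attribute idea concrete, the shape I have in mind looks roughly like the sketch below. It is written as a Java/JUnit 5 composed annotation purely for illustration (my project uses NUnit, where a custom attribute would play the same role), and all names and URLs are made up:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import org.junit.jupiter.api.Tag;

// Marks a test as a regression test and records the issue it guards against.
@Target({ElementType.METHOD, ElementType.TYPE})
@Retention(RetentionPolicy.RUNTIME)
@Tag("regression")           // lets the whole group be included/excluded by tag
public @interface Regression {
    String issue();          // required, e.g. "https://jira.example.com/browse/PROJ-123"
}

// Usage, alongside the normal @Test annotation:
//   @Test
//   @Regression(issue = "https://jira.example.com/browse/PROJ-123")
//   void discountSurvivesRemovingALineItem() { ... }

Because the annotation is retained at runtime, a small reporting step could read the issue URL off each failed test and print it, which would cover the "which issues are failing regression tests" part.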
The ideal scenario for me would be that I can look at the issue tracker and see which issues have been closed with regression tests in place (a gold star to that developer!), and look at the test reports and see which issues are failing regression tests.
Has anyone come across, or come up with, a good solution to this?

I fail to see what makes regression tests different from any other tests. Why would you want to run only regression tests, or everything except regression tests? If a regression or non-regression test fails, that means this specific functionality is not working, and the product owner has to decide how critical the problem is. Rather than differentiating between tests, simply do code reviews and don't allow any commits without tests.
In case we want to see what tests have been added for a specific issue, we go to the issue-tracking system. All the commits are there, ideally only one (thanks to squashing); the issue tracker is connected to Git, so we can easily browse the changed files.
In case we want to see (for whatever reason) what issue is related to some specific test or line of code, we just give tests a meaningful business name, which helps in finding any related information. If the problem is more technical and we know we may need the specific issue, we simply add the issue number in a comment. If you want to automate retrieval, just standardize the format of the comment.
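If you do standardize the comment format, the retrieval can be automated with very little code. A minimal sketch (assuming a convention of "// Regression: PROJ-123" in the test sources and a conventional src/test/java layout, both of which are just assumptions for the example):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import java.util.stream.Stream;

public class RegressionCommentScanner {
    // Matches comments of the form "// Regression: PROJ-123".
    private static final Pattern ISSUE = Pattern.compile("//\\s*Regression:\\s*([A-Z]+-\\d+)");

    public static void main(String[] args) throws IOException {
        Path testRoot = Path.of(args.length > 0 ? args[0] : "src/test/java");
        try (Stream<Path> files = Files.walk(testRoot)) {
            files.filter(p -> p.toString().endsWith(".java"))
                 .forEach(RegressionCommentScanner::report);
        }
    }

    private static void report(Path file) {
        try {
            Matcher m = ISSUE.matcher(Files.readString(file));
            while (m.find()) {
                System.out.println(m.group(1) + " -> " + file);   // issue key and the test file covering it
            }
        } catch (IOException e) {
            System.err.println("Could not read " + file + ": " + e.getMessage());
        }
    }
}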
Another helpful technique is to structure your program correctly. If every piece of functionality has its own package, and tests are packaged and named meaningfully, then it's also easy to find any related code.


Any suggestions on software development along the lines of quality assurance?

SUMMARY
The software I am working on is an engineering analysis tool for renewable energy systems. It evaluates power outputs and things of that nature.
My current job task has been to test different configurations of said systems (e.g. using an evaporative cooling condenser instead of the default radiative cooling condenser, via a drop-down menu on the user interface) to ensure that the release version of this software functions as it should across the board.
So far the process has been very simple: change one factor of the base model and check, via the outputs generated, that the code functions properly.
How I need to pivot
But now we want to start testing combinations of changes. This shift in workflow will be very tedious and time consuming: we are considering a factorial design approach to map out a plan of attack for which configurations to test, to ensure that all the code functions properly when multiple things are changed at once. This could create many different configurations that I will need to test manually.
All in all, does anyone have any suggestions for an alternative approach? I've been reading up on alternative software testing methods, but am not entirely sure whether things like regression testing, smoke testing, or sanity checks are better suited to this scenario.
Thanks!
EDIT
The software I am testing is built in Visual Studio, where I am using the Google Test framework to manually test my system configurations.
Currently, each test that I create for a given concentrated solar power system configuration requires me to manually find the difference in code (via WinMerge) between the default configuration (no changes made) and the alternative configuration. I use those code differences in the Google Test framework to simulate what that alternative configuration should output, testing it against the accepted output values.
It's only going to get more complicated, with an element of manual user-interface testing needed ... or so it seems to me.
How can I automate such a testing suite when so much back-end work is required of me?
As I understand it, to avoid the manual effort of testing too many combinations, an automated testing tool is needed in this case. If the software you are testing is browser based, then Selenium is a good candidate; but if the tool runs as an application on Windows or Mac, then some other automation tool that supports Win/Mac apps would be needed. The idea is to create test suites with the different combinations and set the expected results once. Once the suite is ready, it can be run after any change to the software to verify that all the combinations work as expected, without any manual work. There is, however, an effort involved in creating the test suite in the first place and then maintaining it as new scenarios occur or the expected results need to be modified.
It would be a pain to test all the many combinations manually each time; automated testing can surely ease that.
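If the configurations can be driven below the UI, a data-driven test is the most direct way to "set the expected results once". Here is a rough Java/JUnit 5 sketch of the idea (the asker's project uses Google Test, which has analogous value-parameterized tests; the configuration names, the simulate() hook, and the expected numbers are all placeholders):

import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.stream.Stream;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.Arguments;
import org.junit.jupiter.params.provider.MethodSource;

class PlantConfigurationTest {

    // Placeholder for the real engineering model; wire the actual simulation call in here.
    static double simulate(String condenser, String coolingMode) {
        return 0.0;
    }

    // Each row: one configuration plus its accepted output value (all values made up).
    static Stream<Arguments> configurations() {
        return Stream.of(
            Arguments.of("radiative", "default", 42.0),
            Arguments.of("evaporative", "default", 40.5),
            Arguments.of("evaporative", "night-only", 39.8)
        );
    }

    @ParameterizedTest
    @MethodSource("configurations")
    void powerOutputMatchesAcceptedValue(String condenser, String coolingMode, double expected) {
        assertEquals(expected, simulate(condenser, coolingMode), 1e-6);
    }
}

A new combination then becomes one more row in the table rather than another manual test run.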

Store results of automatic tests and show results in a web UI

I'm looking for a piece (or a set) of software that allows me to store the outcome (OK/failed) of an automatic test along with additional information (the test protocol, to see the exact reason for a failure, and the device state at the end of a test run, as a compressed archive). The results should be accessible via a web UI.
I don't need fancy pie charts or colored graphs. A simple table is enough. However, the user should be able to filter for specific test runs and/or specific tests. The test runs should have a sane name (like the version of the software that was tested, not just some number).
Currently the build system includes unit tests based on cmake/ctest whose results should be included. Furthermore, integration testing will be done in the future, where the actual tests will run on embedded hardware controlled via network by a shell script or similar. The format of the test results is therefore flexible and could be something like subunit or TAP, if that helps.
I have played around with Jenkins, which is said to be great for automatic tests, but the plugins I tried to make that work don't seem to interact well. To be specific: the test results analyzer plugin doesn't show tests imported with the TAP plugin, and the names of the test runs are just a meaningless build number, although I used the Job Name Setter plugin to set a sensible job name. The filtering options are limited, too.
My somewhat uneducated guess is that I'll stumble over similar issues if I try other random tools of the same class as Jenkins.
Is anyone aware of a solution for my described testing scenario? Lightweight/open source software is preferred.

Automated testing in RPG (or other ILE languages)

We have quite a lot of RPG programs here, and we do a lot of automated testing, but we are not yet very good at combining the two. Are there good ways to do automated testing on RPG programs -- or on any other ILE programs, for that matter?
I am aware of a project named RPGUnit, but its last update was in 2007. However, it seems it is still used, since RPG Next Gen is currently putting some work into including it.
What's your experience with those? Is there something else that I am missing, like some great software tool Google fails to find?
I'm interested in unit testing as well as integration testing of complete projects. Anything that integrates with tools like Jenkins is welcome. If it involves IBM's Rational Developer or System i Navigator, that's okay, too.
We are in an early phase of creating new testing plans for our RPG development process, and I don't want it to head in the wrong direction right from the start.
You probably already know how broad a subject 'testing' can be. IBM has a product called Rational Functional Tester (I haven't used it): http://www-01.ibm.com/software/awdtools/tester/functional/ I myself use RPGUnit. No, it hasn't been updated recently, but it still has all the pieces needed for testing subprocedures in the same way one would test Java methods.
Frankly, that's the easy part. The hard part is creating a test database and keeping it current enough to be representative of the production database. Rodin have some database tooling, but I haven't the budget for those, so I roll my own more or less by hand. I use many SQL statements in a CL program to extract production data so I can maintain referential integrity. Then I use some more SQL to add my exceptional test cases - those relationships which aren't present in the production data but need to be tested for. Then I make a complete copy of the test database as a reference point. Then I run my test cases, which will update the test database. I've written a home-grown CMPPFM utility that allows me to compare the reference database against the now-modified test database. This shows the changes, but it still needs a lot of manual labour to review the comparisons and ensure that the proper rows got the proper updates. I haven't gone the extra mile to automate that yet. One big caveat: there are some columns you don't care about, like a change timestamp.
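That comparison step is the part that lends itself best to automation. A rough sketch of the idea (plain JDBC from Java, which also runs on IBM i; the connection string, table names, key column, and ignored column are all made up for the example):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;
import java.util.Set;

public class TestDbDiff {

    public static void main(String[] args) throws SQLException {
        // Hypothetical connection; on IBM i this could just as well be the JTOpen JDBC driver.
        try (Connection c = DriverManager.getConnection("jdbc:db2://host/db", "user", "pass")) {
            diff(c, "REFLIB.ORDERS", "TESTLIB.ORDERS", "ORDER_ID", Set.of("CHG_TS"));
        }
    }

    // Reports rows whose values differ between the reference copy and the post-run test database.
    static void diff(Connection c, String refTable, String testTable,
                     String keyColumn, Set<String> ignoredColumns) throws SQLException {
        Map<String, String> before = snapshot(c, refTable, keyColumn, ignoredColumns);
        Map<String, String> after = snapshot(c, testTable, keyColumn, ignoredColumns);
        for (Map.Entry<String, String> row : after.entrySet()) {
            String old = before.get(row.getKey());
            if (!Objects.equals(old, row.getValue())) {
                System.out.println(row.getKey() + "\n  was: " + old + "\n  now: " + row.getValue());
            }
        }
    }

    // Flattens each row (minus ignored columns like a change timestamp) into a comparable
    // string keyed by the primary key.
    static Map<String, String> snapshot(Connection c, String table, String keyColumn,
                                        Set<String> ignoredColumns) throws SQLException {
        Map<String, String> rows = new HashMap<>();
        try (Statement s = c.createStatement();
             ResultSet rs = s.executeQuery("SELECT * FROM " + table)) {
            ResultSetMetaData md = rs.getMetaData();
            while (rs.next()) {
                StringBuilder sb = new StringBuilder();
                for (int i = 1; i <= md.getColumnCount(); i++) {
                    String col = md.getColumnName(i);
                    if (ignoredColumns.contains(col)) continue;
                    sb.append(col).append('=').append(rs.getString(i)).append(' ');
                }
                rows.put(rs.getString(keyColumn), sb.toString());
            }
        }
        return rows;
    }
}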
We went with RPGUnit and found it a good base to work from, but ended up extending it a lot to tie in with our change management system and the way we work. I've written about the things we tried here: http://www.littlebluemonkey.com/blog/my-rpg-unit-test-journey

"Unit testing" a report

How would you "unit test" a report created by some report engine like Crystal Reports or SQL Server Reporting Services?
The problem with reports is akin to the problem with GUIs.
If the report/GUI has a lot of (misplaced) intelligence, it is going to make testing difficult.
The solution then is to
Separated Presentation: separate presentation from content (data access/domain/business rules). In the current context this means that you create some sort of ViewModel class that mirrors the content of the final report (e.g. if you have order details and line items in your report, this class should have properties for the details and a list of line-item objects). The ViewModel is infinitely simpler to test. The last mile, applying presentation to the content, should be relatively trivial (a thin UI).
E.g. if you use XSLT to render your reports, you can test the data XML using tools like XmlUnit or a string compare. You can then be reasonably confident in the XSL transformations on the data XML for the final report... Also, any bugs in here would be trivial to fix.
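As a rough illustration of such a ViewModel (Java, with made-up names matching the order/line-item example above), the class carries only the report's content and derived values, so it can be unit tested without any report engine in sight:

import java.math.BigDecimal;
import java.util.List;

// Mirrors the content of the final report; no rendering concerns here.
public record OrderReportViewModel(String orderNumber,
                                   String customerName,
                                   List<LineItem> lineItems) {

    public record LineItem(String description, int quantity, BigDecimal unitPrice) {
        public BigDecimal total() {
            return unitPrice.multiply(BigDecimal.valueOf(quantity));
        }
    }

    // A derived value the report displays; trivial to assert on in a unit test.
    public BigDecimal grandTotal() {
        return lineItems.stream()
                        .map(LineItem::total)
                        .reduce(BigDecimal.ZERO, BigDecimal::add);
    }
}

The report itself then just binds fields to these properties.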
However, if you're using a third-party product like Crystal Reports, you have no control over, or access to, the report generation to hook into. In such cases, the best you can do is generate representative expected output files (e.g. PDFs), called golden files. Use these as a read-only resource in your tests and compare the actual output against them. This approach is very fragile, though, in that any substantial change to the report-generation code might render all previous golden files incorrect, so they would have to be regenerated. If the cost-to-benefit ratio of automation is too high, I'd say go manual with old-school Word-doc test plans.
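A minimal sketch of a golden-file check in that spirit (Java/JUnit 5; the resource path and the renderReportFromFixedData() helper are placeholders for whatever your report engine exposes):

import static org.junit.jupiter.api.Assertions.assertArrayEquals;

import java.nio.file.Files;
import java.nio.file.Path;
import org.junit.jupiter.api.Test;

class OrderReportGoldenFileTest {

    @Test
    void reportMatchesGoldenCopy() throws Exception {
        byte[] actual = renderReportFromFixedData();   // hypothetical call into the report engine
        byte[] golden = Files.readAllBytes(Path.of("src/test/resources/order-report.golden.pdf"));
        assertArrayEquals(golden, actual, "Report output drifted from the golden file");
    }

    private byte[] renderReportFromFixedData() {
        // Render the report here from a fixed, constant dataset.
        return new byte[0];
    }
}

Note that binary formats such as PDF often embed timestamps and other metadata, which is part of what makes byte-for-byte golden files fragile.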
For testing our own Java-based reporting product, i-net Clear Reports, we run a whole slew of test reports once, exporting them to various formats and making sure the output is as desired; we then continuously have these same reports run daily, comparing the results to the original data. Any differences then show up as test failures.
It has worked pretty well for us. The disadvantage is that any minor difference, even one that doesn't actually matter, shows up as a test failure until the test data is reset.
Side note: this isn't exactly a unit test but rather an acceptance test. But I don't see how you could truly "unit test" an entire report.
The best I can think of is comparing the results to an expected output.
Maybe some intelligence can be added, but it is not that easy to test these big blocks.
I agree with Gamecat.
Generate the report from fixed (constant) data, and compare it to the expected output for that data.
After that you might be able to use simple tests such as diff (checking whether the files are identical).
My current idea is to create tests at two levels:
Unit tests: Structure the report to enable testing using some ideas for testing a UI, like Humble View. The report itself will be made as dumb as possible. It should consist mostly of just simple field bindings. The data items/objects that act as the source of these bindings can then be unit tested.
Acceptance tests: Generate some example reports. Verify them by hand first. Then set up an automated test that does a compare using diff.

What's the Point of Selenium?

Ok, maybe I'm missing something, but I really don't see the point of Selenium. What is the point of opening the browser using code, clicking buttons using code, and checking for text using code? I read the website and I see how in theory it would be good to automatically unit test your web applications, but in the end doesn't it just take much more time to write all this code rather than just clicking around and visually verifying things work?
I don't get it...
It allows you to write functional tests in your "unit" testing framework (the issue is the naming of the latter).
When you are testing your application through the browser, you are usually testing the system fully integrated. Considering you already have to test your changes before committing them (smoke tests), you don't want to run those tests manually over and over.
Something really nice is that you can automate your smoke tests, and QA can augment them. Pretty effective, as it reduces duplication of effort and brings the whole team closer together.
P.S. As with any practice you are using for the first time, it has a learning curve, so it usually takes longer the first few times. I also suggest you look at the Page Object pattern; it helps keep the tests clean.
Update 1: Notice that the tests will also run the JavaScript on the pages, which helps with testing highly dynamic pages. Also note that you can run it with different browsers, so you can check for cross-browser issues (at least on the functional side, as you still need to check the visuals).
Also note that as the number of pages covered by tests builds up, you can quickly create tests with complete cycles of interactions. Using the Page Object pattern they look like:
LastPage aPage = somePage
    .SomeAction()
    .AnotherActionWithParams("somevalue")
    // ... other actions
    .AnotherOneThatKeepsYouOnThePage();
// Add some asserts using methods that give you info
// on LastPage (or that check the info is there).
// You can of course break the statements to add additional
// asserts on the multi-step story.
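For context, here is a hypothetical pair of Page Objects behind a chain like the one above (Selenium WebDriver in Java; the element ids and class names are made up). Each action performs the interaction and returns the page object you land on, which is what makes the fluent chain work:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

class CheckoutPage {
    private final WebDriver driver;

    CheckoutPage(WebDriver driver) {
        this.driver = driver;
    }

    // An action that keeps you on the same page.
    CheckoutPage enterCouponCode(String code) {
        driver.findElement(By.id("coupon")).sendKeys(code);
        return this;
    }

    // An action that navigates, so it returns the next page's object.
    ConfirmationPage placeOrder() {
        driver.findElement(By.id("place-order")).click();
        return new ConfirmationPage(driver);
    }
}

class ConfirmationPage {
    private final WebDriver driver;

    ConfirmationPage(WebDriver driver) {
        this.driver = driver;
    }

    String confirmationMessage() {
        return driver.findElement(By.id("confirmation")).getText();
    }
}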
It is important to understand that you go about this gradually. If it is an already-built system, you add tests for the features/changes you are working on, adding more and more coverage along the way. Going manual instead usually hides what you missed testing; if you made a change that affects every single page and you only check a subset (as time doesn't allow more), you at least know which ones you actually tested, and QA can work from there (hopefully by adding even more tests).
This is a common thing that is said about unit testing in general. "I need to write twice as much code for testing?" The same principles apply here. The payoff is the ability to change your code and know that you aren't breaking anything.
Because you can repeat the SAME test over and over again.
If your application is even 50+ pages and you need to do frequent builds and test it against X number of major browsers it makes a lot of sense.
Imagine you have 50 pages, all with 10 links each, and some with multi-stage forms that require you to go through the forms, putting in about 100 different sets of information to verify that they work properly with all credit card numbers, all addresses in all countries, etc.
That's virtually impossible to test manually. It becomes so prone to human error that you can't guarantee the testing was done right, never mind what the testing proved about the thing being tested.
Moreover, if you follow a modern development model, with many developers all working on the same site in a disconnected, distributed fashion (some working on the site from their laptop while on a plane, for instance), then the human testers won't even be able to access it, much less have the patience to re-test every time a single developer tries something new.
On any decent size of website, tests HAVE to be automated.
The point is the same as for any kind of automated testing: writing the code may take more time than "just clicking around and visually verifying things work", maybe 10 or even 50 times more.
But any nontrivial application will have to be tested far more than 50 times eventually, and manual tests are an annoying chore that will likely be omitted or done shoddily under pressure, which results in bugs remaining undiscovered until just before (or after) important deadlines, which results in stressful all-night coding sessions or even outright monetary loss due to contract penalties.
Selenium (along with similar tools, like Watir) lets you run tests against the user interface of your Web app in ways that computers are good at: thousands of times overnight, or within seconds after every source checkin. (Note that there are plenty of other UI testing pieces that humans are much better at, such as noticing that some odd thing not directly related to the test is amiss.)
There are other ways to involve the whole stack of your app by looking at the generated HTML rather than launching a browser to render it, such as Webrat and Mechanize. Most of these don't have a way to interact with JavaScript-heavy UIs; Selenium has you somewhat covered here.
Selenium will record and re-run all of the manual clicking and typing you do to test your web application. Over and over.
Over time, observing myself has shown that I tend to run fewer tests and start skipping some, or forgetting about them.
Selenium will instead take each test, run it, and if it doesn't return what you expect, let you know.
There is an upfront cost of time to record all these tests. I would recommend it as with unit tests: if you don't have them already, start with the most complex, touchy, or most frequently updated parts of your code.
And if you save those tests as JUnit classes you can rerun them at your leisure, as part of your automated build, or in a poor man's load test using JMeter.
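For the sake of illustration, a minimal JUnit-wrapped Selenium test might look like the sketch below (modern WebDriver API; the URL, element ids, and expected text are invented, and a local browser driver is assumed to be installed):

import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

class OrderFormSmokeTest {

    private WebDriver driver;

    @BeforeEach
    void openBrowser() {
        driver = new FirefoxDriver();
    }

    @Test
    void confirmationAppearsAfterSubmittingTheForm() {
        driver.get("https://example.test/order");
        driver.findElement(By.id("name")).sendKeys("Jane Tester");
        driver.findElement(By.id("submit")).click();
        assertTrue(driver.getPageSource().contains("Thank you for your order"));
    }

    @AfterEach
    void closeBrowser() {
        driver.quit();
    }
}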
In a past job we used to unit test our web-app. If the web-app changes its look the tests don't need to be re-written. Record-and-replay type tests would all need to be re-done.
Why do you need Selenium? Because testers are human beings. They go home every day, can't always work weekends, take sickies, take public holidays, go on vacation every now and then, get bored doing repetitive tasks and can't always rely on them being around when you need them.
I'm not saying you should get rid of testers, but an automated UI testing tool complements system testers.
The point is the ability to automate what was before a manual and time consuming test. Yes, it takes time to write the tests, but once written, they can be run as often as the team wishes. Each time they are run, they are verifying that behavior of the web application is consistent. Selenium is not a perfect product, but it is very good at automating realistic user interaction with a browser.
If you do not like the Selenium approach, you can try HtmlUnit; I find it more useful and easier to integrate into existing unit tests.
For applications with rich web interfaces (like many GWT projects), Selenium/Windmill/WebDriver/etc. is the way to create acceptance tests. In the case of GWT/GXT, the final user-interface code is in JavaScript, so creating acceptance tests using normal JUnit test cases is basically out of the question. With Selenium you can create test scenarios matching real user actions and expected results.
Based on my experience with Selenium, it can reveal bugs in the application logic and user interface (provided your test cases are well written). Dealing with AJAX front ends requires some extra effort, but it is still feasible.
I use it to test multi-page forms, as this takes the burden out of typing the same thing over and over again. And having the ability to check whether certain elements are present is great. Again, using the form as an example, your final Selenium test could check that something like "Thanks Mr. Rogers for ordering..." appears at the end of the ordering process.