Getting a custom-built financial software system under test automation - unit-testing

I need to get a financials-heavy software system under test automation. It has been in development and production for a couple of years. I've been an advocate for developers building test automation into their coding practices for over 18 years, but this is a unique challenge. Beyond establishing test automation standards and training for the technologies we are using, I need to figure out the most effective strategy for prioritizing and implementing automated tests in the existing system so that they improve confidence/stability/reliability while we continue rapid development of future features. There are many teams working on this solution.
My primary decision point is whether I should start by focusing on the internals of the financial transactions, processes, and rules - with unit and integration tests under the hood - or start with use cases that tell full stories. Two things are competing for priority: making sure requirements have been implemented correctly vs. making sure what has already been implemented does NOT get broken by new code merges. I realize that these are not mutually exclusive, but time is of the essence and I need to move yesterday on implementing the highest-value tests.
I would love to discuss this with others experienced with test automation in systems heavy on financial transactions that are CRITICAL.

Since you will never catch up on tests, you should focus on new features and fixes. Whenever code is about to change, try to identify which parts of the existing code will be touched and see if you can improve the test coverage for those parts before writing any new code. Don't aim for perfection; just try to get some tests running that would catch any serious regressions. Then try to TDD the new feature/fix.
Also check out "Working Effectively with Legacy Code" by Michael Feathers.
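As a rough illustration of that advice (not taken from the book, and with made-up names like InterestCalculator), a first step before touching existing code can be a characterization test that pins down what the system does today; the sketch below uses Java/JUnit purely for illustration.

```java
// Hypothetical example: before changing an existing calculation, pin down its
// current behavior so a regression is caught by the test, not by the customer.
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class InterestCalculatorCharacterizationTest {

    // Minimal stand-in for the legacy class you are about to change.
    static class InterestCalculator {
        double monthlyInterest(double balance, double annualRatePercent) {
            return balance * (annualRatePercent / 100.0) / 12.0;
        }
    }

    @Test
    public void pinsDownCurrentMonthlyInterestBehaviour() {
        InterestCalculator calc = new InterestCalculator();
        // The expected value is whatever the system produces today,
        // not necessarily what the spec says it should produce.
        assertEquals(8.75, calc.monthlyInterest(1000.0, 10.5), 0.0001);
    }
}
```

Once the current behavior is recorded like this, you can refactor or extend the code under it with far more confidence.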

Related

Increasing testability, when coding with Bold for Delphi framework

Background
I work in a team of 7 developers and 2 testers on a logistics system.
We use Delphi 2007 and model-driven development with Bold for Delphi as the framework.
The system has been in production for about 7 years now and has about 1.7 million lines of code.
We release to production every 4-5 weeks, and after almost every release we have to patch bugs we didn't find. This is of course irritating both for us and for the customers.
Current testing
The solution is of course more automated testing. Currently we have manual testing, plus a Testdbgenerator that starts with an empty database and adds data via the modelled methods. We also have TestComplete running some very basic scripts for testing the GUI. Lack of time stops us from adding more tests, and the scripts are also sensitive to changes in the application. Some years ago I really tried unit testing with DUnit, but I gave up after a few days: the units are too tightly coupled.
Unit testing preconditions
I think I know some preconditions for unit testing:
Write small methods that do one thing, but do it well.
Don’t repeat yourself.
First write the test that fails, then write the code so the test pass.
The connections between units should be loose. They should not know much about each other.
Use dependency injection.
Framework to use
We may upgrade to Delphi XE2, mainly because of the 64-bit compiler.
I have looked at Spring a bit, but this requires an upgrade from D2007 and that will not happen now. Maybe next year.
The question
Most code is still not tested automatically. So what is the best path for increasing the testability of old code? Or is it perhaps best to start writing tests for new methods only?
I'm not sure what the best way is to increase automated testing, and comments about it are welcome. Can we use D2007 + DUnit now and then easily change to Delphi XE2 + Spring later?
EDIT: Our current methodology for manual testing is just "pound on it and try to break it", as Chris calls it.
You want the book by Michael Feathers, Working Effectively with Legacy Code. It shows how to introduce (unit) tests to code that wasn't written with testability in mind.
Some of the chapters are named for excuses a developer might give for why testing old code is hard, and they contain case studies and suggested ways to address each problem:
I don't have much time and I have to change it
I can't run this method in a test harness
This class is too big and I don't want it to get any bigger
I need to change a monster method and I can't write tests for it.
It also covers many techniques for breaking dependencies; some might be new to you, and some you might already know but just haven't thought to use yet.
The requirements for automated unit testing are exactly these:
use a unit testing framework (for example, DUnit).
use some kind of mocking framework.
Item 2 is the tough one.
DRY, small methods, start with a test, and DI are all sugar. First you need to start unit testing. Add DRY, etc. as you go along. Reduced coupling helps make code easier to unit test, but without a giant refactoring effort you will never reduce coupling in your existing code base.
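To make point 2 concrete, here is a hedged sketch of breaking a database dependency behind an interface so the logic can be tested without the database. It uses Java/JUnit with a hand-rolled fake rather than a specific mocking framework, and OrderRepository/OrderService are made-up names, not anything from the question.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class OrderServiceTest {

    // The seam: production code depends on this interface, not a concrete DB class.
    interface OrderRepository {
        double totalFor(String customerId);
    }

    // Hand-rolled fake standing in for the real database-backed implementation.
    static class FakeOrderRepository implements OrderRepository {
        @Override
        public double totalFor(String customerId) {
            return 250.0; // canned value, no database needed
        }
    }

    // Class under test, written against the interface.
    static class OrderService {
        private final OrderRepository repository;
        OrderService(OrderRepository repository) { this.repository = repository; }
        double totalWithTax(String customerId, double taxRate) {
            return repository.totalFor(customerId) * (1.0 + taxRate);
        }
    }

    @Test
    public void calculatesTotalWithTaxWithoutTouchingTheDatabase() {
        OrderService service = new OrderService(new FakeOrderRepository());
        assertEquals(275.0, service.totalWithTax("C-1", 0.10), 0.0001);
    }
}
```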
Consider writing tests for stuff that is new and stuff that is changed in the release. Over time you will get a reasonable base of unit tests. New and changed stuff can also be refactored (or written nicely).
Also, consider an automated build process that runs unit tests and sends email when the build breaks.
This only covers unit testing. For QA testers, you will need a tool (they exist, but I can't think of any) that allows them to run automated tests (which are not unit tests).
Your testing team is too small, IMO. I've worked in teams where the QA dept outnumbers the Developers. Consider working in "sprints" of manageable chunks (features, fixes) that fit in smaller cycles. "Agile" would encourage 2-week sprints, but that may be too tight. Anyway, it would keep the QA constantly busy, working farther ahead of the release window. Right now, I suspect that they are idle until you give them a huge amount of code, then they're swamped. With shorter release cycles, you could keep more testers busy.
Also, you didn't say much about their testing methodology. Do they have standard scripts that they run, where they verify appearance and behavior against expected appearance and behavior? Or do they just "pound on it and try to break it"?
IMO, DUnit testing is hard to do with lots of dependencies like databases, communication, etc. But it's doable. I've created DUnit classes that automatically run database setup scripts (look for a .sql file with the same name as the class being tested, run the SQL, then the test proceeds), and it's been very effective. For SOAP communications, I have a SoapUI mock service running that returns canned results, so I can test my communications.
It does take work, but it's worth it.
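For what it's worth, the same .sql-file-per-test-class trick can be sketched in other stacks too. Here is a rough JUnit/JDBC equivalent; the testdata folder, the in-memory H2 URL and the class names are assumptions for illustration, not part of the original setup.

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

import org.junit.After;
import org.junit.Before;

// Base class: before each test, look for a .sql file named after the concrete
// test class and run it against the test database, mirroring the DUnit trick.
public abstract class DatabaseTestCase {

    protected Connection connection;

    @Before
    public void setUpDatabase() throws SQLException, IOException {
        // Assumed: an in-memory H2 database; swap the URL for your own test DB.
        connection = DriverManager.getConnection("jdbc:h2:mem:test;DB_CLOSE_DELAY=-1");
        Path script = Path.of("testdata", getClass().getSimpleName() + ".sql");
        if (Files.exists(script)) {
            String sql = Files.readString(script, StandardCharsets.UTF_8);
            try (Statement statement = connection.createStatement()) {
                // Simple scripts only; a proper script runner may be needed
                // for large multi-statement files on some drivers.
                statement.execute(sql);
            }
        }
    }

    @After
    public void tearDownDatabase() throws SQLException {
        connection.close();
    }
}
```

Each concrete test class then just extends this and works against a database that was set up specifically for it.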

Testing... how do the pros do it, and what techniques can scale to single-person development?

I've been writing software for years, but have never mastered the art of testing. My typical testing includes thorough run-throughs on my machines, and then testing in various operating systems via VMware. Mainly a brute-force play-with-it-until-it-breaks-or-doesn't approach. Where possible I work on actual hardware, but this isn't always practical.
My question is twofold:
How do medium-sized professional development houses do their testing?
What common techniques or procedures (outside of unit testing) can apply to a developer team of one? I'm looking for practicality.
Thank you for your time and input.
Step 1: Unit Testing
Divide your software into components (which can be anything from single functions up to whole programs) and unit-test those components thoroughly, especially as relates to the API and behavior that the rest of the application can see. (Don't forget to check for failure modes too, but beware of binding too carefully to the exact nature of a failure; it's often good enough to just test for the presence of the right class of exception rather than its exact message.) Make sure those tests pass; you're honing them in on a specification of what the component should be doing. (Automated test running helps here, as does a CI system.) This is important because of…
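For example, a unit test that asserts on the exception class rather than its exact message might look like this (a Java/JUnit sketch; the Account class is invented for illustration):

```java
import org.junit.Test;
import static org.junit.Assert.assertThrows;

public class WithdrawalTest {

    // Minimal stand-in for a component under test.
    static class Account {
        private double balance;
        Account(double balance) { this.balance = balance; }
        void withdraw(double amount) {
            if (amount > balance) {
                throw new IllegalArgumentException("insufficient funds: " + amount);
            }
            balance -= amount;
        }
    }

    @Test
    public void rejectsOverdraft() {
        Account account = new Account(100.0);
        // Assert on the exception class, not its exact message text.
        assertThrows(IllegalArgumentException.class, () -> account.withdraw(150.0));
    }
}
```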
Step 2: Integration Testing
Test that the compositions of components that make up the application work (this is integration testing). Ideally you'll only be finding bugs in the specifications of things at this point (hah!) and wherever you identify that a component is wrong despite passing its unit tests, that tells you that there is a bug. Whenever things fail to work together despite being told to do so, you've probably got a bug in your specs from the previous step so you typically resolve these things by adding more detail to your unit tests and fixing the components until they work.
Note that to make integration good, you want to keep this stage simple enough that the integration itself is in the “Obviously No Bugs” class of programs instead of the larger “No Obvious Bugs” class. An integration framework like Spring or a scripting language can help a lot here (though with the latter you have to guard against creating components on the sly; if you create a component then admit it and make sure it has a proper usage contract and unit tests to ensure that it meets its contract).
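As a small, hedged illustration of checking that components work together while an irrelevant external dependency is mocked out, here is a Java/JUnit/Mockito sketch (ExchangeRateService and PriceConverter are made-up names):

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

public class PricingIntegrationTest {

    interface ExchangeRateService {            // external dependency to be mocked
        double rate(String fromCurrency, String toCurrency);
    }

    static class PriceConverter {              // real component under test
        private final ExchangeRateService rates;
        PriceConverter(ExchangeRateService rates) { this.rates = rates; }
        double convert(double amount, String from, String to) {
            return amount * rates.rate(from, to);
        }
    }

    @Test
    public void convertsUsingTheComposedComponents() {
        ExchangeRateService rates = mock(ExchangeRateService.class);
        when(rates.rate("EUR", "USD")).thenReturn(1.10);

        PriceConverter converter = new PriceConverter(rates);
        assertEquals(110.0, converter.convert(100.0, "EUR", "USD"), 0.0001);
    }
}
```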
Where you can, you can make components by composing others together; these higher-level components need to be unit tested as characterized in Step 1 above. This might sound like extra work – it probably is – but it does have the advantage of meaning that you can use automated tests for larger parts of the program. (Alas, it's harder to do all integration tests with an automated test tool; such things tend to work better doing unit tests where you can mock out all the irrelevant parts.) But this doesn't save you from…
Step 3: Acceptance Testing
This is where the overall application is tested to see if it actually does what is desired. This might be automatable, but usually isn't. This is the level where you bring in users to let them see whether things are what they expected, though you might want to use internal testers a bit first. How easy this all is depends on the nature of the application.
Note also that user interfaces tend to spend more time in this step than the others, precisely because what makes for a good UI is difficult to impossible to pin down in algorithms (it relates much more to human psychology after all).
A final note: What I've written here sounds like testing is meant to be a laborious process that takes ages at the end of a project. It isn't! You can often get parts of an application done before others, do an integration of those parts (with mocks for the other bits) and test quite a bit of how acceptable this sub-application really is. Of course, when doing this take care to stop users from believing that everything is done; one way is to have dialog boxes that pop up and say things like “magic to happen here”. Silly but effective. :-)
For a small team, unit tests or automated integration tests are crucial, because there are not enough hands or enough time for manual testing - the more you automate the better. This includes Continuous Integration.
Set up a separate 'beta' environment that is as close to your production environment as possible. Do most of your tests there - this way you will pick up all the things you've forgotten in your 'release plan'.
As a professional tester, my suggestion is that you should have a healthy mix of automated and manual testing. The examples below are in .NET, but it should be easy to find a tool for whatever technology you are using.
AUTOMATED TESTING
Unit Testing
Use NUnit to test your classes, functions and interaction between them.
http://www.nunit.org/index.php
Automated Functional Testing
If it's possible you should automate a lot of the functional testing. Some frameworks have functional testing built into them; otherwise you have to use a tool for it. If you are developing web sites/applications you might want to look at Selenium.
http://www.peterkrantz.com/2005/selenium-for-aspnet/
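Selenium itself has bindings for several languages. Purely as an illustration, a Java/JUnit WebDriver test might look like the sketch below; the URL, element ids and expected text are invented, and a locally installed ChromeDriver is assumed.

```java
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import static org.junit.Assert.assertTrue;

public class LoginFunctionalTest {

    private WebDriver driver;

    @Before
    public void openBrowser() {
        driver = new ChromeDriver();   // requires a local ChromeDriver installation
    }

    @Test
    public void userCanLogIn() {
        // URL and element ids are made up for illustration.
        driver.get("http://localhost:8080/login");
        driver.findElement(By.id("username")).sendKeys("testuser");
        driver.findElement(By.id("password")).sendKeys("secret");
        driver.findElement(By.id("loginButton")).click();
        assertTrue(driver.getPageSource().contains("Welcome, testuser"));
    }

    @After
    public void closeBrowser() {
        driver.quit();
    }
}
```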
Continuous Integration
Use CI to make sure all your automated tests run every time someone in your team makes a commit to the project.
http://martinfowler.com/articles/continuousIntegration.html
MANUAL TESTING
As much as I love automated testing, it is, IMHO, not a substitute for manual testing. The main reason is that an automated test can only do what it is told and only verify what it has been told to view as pass/fail. A human can use their intelligence to find faults and raise questions that appear while testing something else.
Exploratory Testing
ET is a very low-cost and effective way to find defects in a project. It takes advantage of the intelligence of a human being and teaches the testers/developers more about the project than any other testing technique I know of. Doing an ET session aimed at every feature deployed in the test environment is not only an effective way to find problems fast, but also a good way to learn - and fun!
http://www.satisfice.com/articles/et-article.pdf
My testing tool examples are Java based, but I will try to suggest tools which are ported to multiple languages or are language agnostic.
Use unit testing tools like JUnit (ported to a variety of languages). This will allow you to refactor your code safely. Most code bugs should result in the addition or correction of at least one test.
Use revision control, and set up an automated build environment that checks out and builds the code. It should then run the automated test suite. If the application uses a database, the build environment should have its own database. Use different code branches for production (released) and development code.
Use integration testing tools like HttpUnit or Synergy to test web applications. Tools of this type are basically language agnostic, but you may want to choose a tool which can be extended in the language(s) you are using. For non-web applications, there may not be an equivalent tool for your platform. You may also want to use a performance tool like JMeter.
These tools have some setup costs, but a quick payback. Overall development time may be the same or less than if you didn't use the tools.
Acceptance tests generally don't lend themselves to automated testing. Where they do, include them in the integration testing. Get acceptance feedback as early and often as possible.
How do the pros do it? That all depends on who the 'Pro' is... There are dozens of different approaches to testing, and plenty of experts to tell you that their way is the one true way. Agile gurus will tell you a very different story from the waterfall gurus. The ISTQB guys will tell you a very different story from the Context-Driven guys.
Unfortunately there is no one true way, and you have to figure it out for yourself. Your approach to testing depends on too many factors. That's probably not very helpful, but you just need to be aware that any answer you get here will be only one option of many, and it may be completely inappropriate for your situation.
Personally, after several years in software testing, I have decided to align myself with the Context Driven school of software testing. See: http://www.context-driven-testing.com
Secondly, from your description of your current approach, that sounds a lot like exploratory testing to me. You may find this material interesting: satisfice.com/sbtm/
One thing you can do (combined with all the previous suggestions) is identify the risky and critical areas of your app and try to focus your testing efforts on these areas.

Is Unit Testing Suitable for BPM Development?

I'm currently working on a large BPM project at work which uses the Global 360 BPM tool-set called Process 360. Just to give some background; this product works like a lot of other BPM solutions in that you design multiple "process maps" which define the flow of a particular business process you're trying to model, and each process map consists of multiple task nodes connected together which perform particular functions (calling web-services etc).
Currently we're experiencing some pretty serious issues during QA phases of our releases because there isn't any way provided by the tool-set to automate testing of the process map routes. So when a large and complex process is developed and handed over to our test team there are often a large number of issues which crop up. While obviously you'd expect some issues to come out of QA, I can't help the feeling that a lot of the bugs etc could have been spotted during development if we had some sort of automated testing framework which we could use to build up a set of unit tests which proved the various routes in the process map(s).
At the moment the only real development testing that occurs is more akin to functional testing performed by the developers which is documented as a set of manual steps per test-case. The problem with this approach is that it's very time consuming for the developers to run manually, and because of this, is also relatively prone to error. Also; because we're usually on a pretty tight schedule, the tests are often not executed often enough to spot issues early.
As I mentioned earlier; there isn't a way provided by the current tool-set to perform this sort of automated testing. Which actually got me thinking why? Being very new to the whole BPM scene my assumption was that this was just a feature lacking in the product, but I also wonder whether "unit testing" just isn't done in the BPM world traditionally? Perhaps it just isn't suited well to this sort of work?
I'd be interested to know if anyone else has ever encountered these sorts of issues, and also what - if anything - can be done to improve things.
I have seen something about that, though not Global 360 related: using bpelunit for testing processes
I develop a workflow tool, and there is increasing demand to open up the test tools we use for testing the engine to end-users.
I've done "unit" testing with K2.net 2003, another commercial BPM. I'd really call this integration testing, because it requires a test server and it's relatively slow. However, it is automated.
There's a good discussion of this in the book Professional K2 blackpearl (it applies to K2.net 2003 as well).
In order to apply it to your platform, the tool set has to have an API that permits starting process instances, obtaining work items, completing work items, etc. You write tests using any supported language (I used C#) and a testing framework (I used NUnit). If the API supports synchronous calls, this is easier to do. For each test:
Start the process under test
Progress the work item to a decision point
Set process instance data appropriately
Complete the work item
Assert that the work item is now at the expected activity
Delete or complete the process instance
Base test classes or helper methods can make this easier. You could even write a DSL for testing maps.
Essentially you want full "test coverage" of the process/map - test every decision point and ensure that the correct branch is taken; a rough sketch of such a test follows below.
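Here is a self-contained Java/JUnit sketch of that test shape. The ProcessInstance class below is an invented in-memory stand-in for the workflow API, included only so the example runs; in practice you would call whatever your BPM tool-set actually exposes for starting processes, setting instance data and completing work items against a test server.

```java
import java.util.HashMap;
import java.util.Map;

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class InvoiceApprovalProcessTest {

    // Toy stand-in for a workflow API; all names and routing rules are made up.
    static class ProcessInstance {
        private final Map<String, Object> data = new HashMap<>();
        private String currentActivity = "EnterInvoice";

        void setData(String key, Object value) { data.put(key, value); }

        // Completing the current work item advances the (toy) process map:
        // invoices over 10,000 are routed to a manager, others are auto-approved.
        void completeCurrentWorkItem() {
            if ("EnterInvoice".equals(currentActivity)) {
                int amount = (Integer) data.getOrDefault("invoiceAmount", 0);
                currentActivity = amount > 10_000 ? "ManagerApproval" : "AutoApprove";
            }
        }

        String currentActivity() { return currentActivity; }
    }

    @Test
    public void highValueInvoiceIsRoutedToManagerApproval() {
        ProcessInstance instance = new ProcessInstance();   // start the process under test
        instance.setData("invoiceAmount", 15_000);          // drive it to the decision point
        instance.completeCurrentWorkItem();                 // complete the work item
        assertEquals("ManagerApproval", instance.currentActivity());
    }

    @Test
    public void lowValueInvoiceIsAutoApproved() {
        ProcessInstance instance = new ProcessInstance();
        instance.setData("invoiceAmount", 500);
        instance.completeCurrentWorkItem();
        assertEquals("AutoApprove", instance.currentActivity());
    }
}
```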
There are two aspects of BPM that are related yet not identical.
There's BPM that tool and technology vendors advocate for which is all about features.
There's also BPM that Enterprise Architects advocate for which is all about establishing Centers of Excellence.
The former is where a company buys some software.
The latter is where a company makes systemic and inherent changes to the behavior of their IT workers.
The former is supposed to be in the service of the latter but that isn't necessarily so. Acquiring the former is necessary but not sufficient for achieving the latter.
I don't know just how well Global 360 supports what is known as Test-Driven Development, but JBoss jBPM does provide some tool support for easily writing JUnit tests.
However, the tool can't and won't force developers to write them or to embrace TDD principles.

Is Scrum possible without test driven development? [closed]

I have now witnessed two companies move to agile development with Scrum.
In both cases the standard of coding was good enough when each part of the application was only being worked on by one or two developers, with the developers spending a reasonable amount of time on one part of the application before moving to the next task. The defect rates were also reasonable.
However with Scrum the developers are expected:
to all be able to work on all the bits of the application.
to only work on one area of the application for a few days at most before moving to the next area
to mostly work on code they did not write
Code quality became an issue in both of the Scrum projects.
So is there a way to do Scrum that does not lead to these problems, without first getting all the developers to do test driven development?
Have you seen Scrum work well on a large project without test driven development? (If so how?)
I'd like to expand on what Dan said.
It's a very common misconception that Scrum / Agile dictates software engineering principles. This is a fallacy for many reasons. As Dan mentioned, Scrum is a software management process, NOT a software engineering process. That being said, very often you will see many engineering principles associated with Scrum; methodologies such as TDD, XP, etc tend to complement the management methodology that Scrum promotes, but are not required.
The reason that CI, TDD, and other engineering practices are so often found hand-in-hand with Scrum is that in general, many are good practices to follow no matter what management methodology is used.
I'd like to address a couple other fallacies in your OP:
However with Scrum the developers are expected:
* to all be able to work on all the bits of the application.
* to only work on one area of the application for a few days at most before moving to the next area
* to mostly work on code they did not write
As mentioned above, Scrum doesn't dictate what type or kind of work a developer works on. The developers themselves decide on what work to commit themselves to; if a database-heavy dev wants to only work on the DAL and associated stories, there's no reason that they cannot.
Again, Scrum doesn't dictate anything about how to build the application, so your second point is moot (see point 1).
This is a fallacy, since there is nothing that says a developer should only work on code that isn't theirs, or anything about how a developer should develop. If a developer on a Scrum team finds his/herself only working on others' code, that would be coincidental, not because of the scrum process itself.
See this question/answer for more information on the qualities generally expected in a developer working Scrum.
Yes, Scrum describes the software management approach. The program and project management paradigm should not dictate whether or not you use test driven development.
TDD is a software development practice or technique and although it works well with Scrum I don't think it will make or break your success with the practice.
I have personally seen Scrum work well on medium sized projects without a test driven approach to development. That is not to say we didn't write automated tests, they just were not always written first.
Regardless of the use of Scrum, what you were seeing was a change from a code-ownership approach to a communal-code approach. In order for that to work, there has to be a process change which supports it. One such possibility is TDD. There are others (automated unit testing even if it doesn't drive design, coupled with code reviews, strong design communication, greater design up front, not developing on code without first pairing with the original author of the code, and more I'm sure you could think of).
Communal approaches work in smaller communities (in large ones it can degenerate into a tragedy of the commons) with a high sense of cohesion between the members.
We do Scrum at work, but we don't practice TDD. Nothing in the Scrum "guidelines" tells you that you have to use TDD. In fact, most agile practices are merely recommendations insofar as they have proven to work well in agile environments (or even non-agile ones), without being a must if you want to implement Scrum.
We do write lots and lots of unit and integration tests to avoid extensive manual testing and to ensure that later changes in the code don't lead to any unpredicted side effects. But that's not TDD. It's mostly a sensible approach to ensure good code and software quality.
Please note that not implementing TDD was not an "active" decision; it just happened. We are encouraged to "write tests first", e.g. when fixing a bug, so it's kind of a voluntary, situation-driven, non-obligatory way of getting a feel for TDD by applying it on a regular basis, but it is not mandatory.
Like others said: Scrum is a framework which can hold whatever practices you want to enforce in your development team. Some agile practices come naturally, because they generally make sense, but which ones you want to use and which ones you don't want is up to you.
Yes, Scrum is entirely possible and in most cases implemented without utilizing a TDD approach.
However, the flexibility that TDD provides is certainly something that a Scrum methodology can benefit from.
The main project I work on uses a Scrum approach but the embedded nature of our project makes test-driven development (the way that most people do it) impractical.
I think the problem you encountered was that the expectations of the programmers changed, not that the management process went to a Scrum-style system. If programmers are constantly being shuffled around to parts of the code they are not familiar with, investigate the process behind how tasks are being delegated relative to the old method. Are tasks being assigned to the developer who knows that area the best or are they going to the developer with the shortest to-do list? Is there a long backlog of to-do items for one part of the code and a scarcity of to-do items for another part? If you want to keep developers focused on the areas they excel in, then project management will want to adjust sprint lengths and task priorities to make sure that the workload can be distributed as desired and still be feasible given the time constraints of the sprint.
The Scrum framework is pretty small, it defines a few meetings, ideal lengths of iteration, responsibilities of product owner, scrum master... and maybe a bit more.
However, once we have started our iteration there is nothing in Scrum that dictates how and when a developer should develop something. There is only the commitment that it will be 'done' by the end of the sprint.
Scrum is about the team committing to produce results, and the team being empowered to decide how to do so. If that means 1 dev per user story, great. If that means 3 devs per user story, that's fine also. Whatever way the Scrum team believes to be the best way to do the job is what should happen.
To answer the question, yes Scrum is possible without test driven development.
TDD is a worthwhile/recommended practice, but the results will vary depending on team/context. For example, was TDD in place at the start of the project, or are you trying to inject the methodology at a later date?
There's some confusion about Scrum here.
Scrum per se doesn't tell you how/when to do technical things like TDD (that's a forever moving target). Scrum tells you how/when to manage the people things that happen on a project. It is an overall project management technique, not a construction management technique.
If your manager wishes to do the three things listed above during your sprints, that's fine, but that isn't part of the framework of Scrum. Those are construction management concerns, which aren't Scrum. They may be used in your Scrum-framework-bounded project, but they aren't in the official Scrum framework.
I think it's easy to be confused about this, though. Agile techniques like Scrum are usually evangelized by people on the 'net who are all about pushing buzzwords and 'happy shiny' things, and as such don't always understand/communicate well. At least, that's how Agile techniques were introduced to me, by an Agile apologist. It took me a good 6 months before I got past the hype / confusing terminology and figured out what they were talking about.
While there are parts I agree with above, I truly believe that you cannot deliver small increments of code (Scrum for software) without testing, period.
How do you know your sprint didn't break the last 4? How can you guarantee deliverables if you don't know that you didn't break anything in the past?
While I do agree that Scrum is a management process and TDD is a software process, you must have some way to verify that you did not move backwards.
Scrum teaches daily deliverables and always moving forward, even at the expense of speed.
When someone says they want to do CI or Agile or Scrum, to me this automatically means there needs to be unit testing (not integration testing as mentioned above), but rather each individual moving part having its own unit tests.
If you only do an integration test, you are not testing each individual part in a self-contained way. Therefore all you prove is that the method called in one flow works, rather than each possible branch you would see in the MSIL.
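To illustrate the branch-coverage point with a trivial, made-up Java/JUnit example: one flow through the application would typically exercise only one of the branches below, while two small unit tests cover both.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class ShippingFeeTest {

    // Invented method with two branches; a single end-to-end flow usually
    // hits only one of them, whereas the two unit tests below hit both.
    static double shippingFee(double orderTotal) {
        return orderTotal >= 50.0 ? 0.0 : 4.95;
    }

    @Test
    public void largeOrdersShipForFree() {
        assertEquals(0.0, shippingFee(80.0), 0.0001);
    }

    @Test
    public void smallOrdersPayAFlatFee() {
        assertEquals(4.95, shippingFee(20.0), 0.0001);
    }
}
```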

Getting started in Unit Testing as a group in these economic times

We have a group of a few developers and some business analysts. We as developers would like to start adding unit testing as part of our coding practices so that we can deliver maintainable and extensible code, especially since we will also be the ones supporting and enhancing the application in the future. But in this economic downturn we are struggling with the push to get started, because we are challenged to just deliver solutions as fast as possible, with quality not being the top priority. What can we do or say to show that we will be able to deliver faster and with higher quality, as well as prepare for future enhancements?
Basically we just need to get over the learning curve of incorporating unit testing into our daily work, but we cannot do that now because it is viewed as unnecessary overhead that would delay the projects the business needs now.
We as developers want to provide the highest value to the business, especially quickly, but we know that we will also need to do this 6 months from now and we need to plan for that as well, and we believe that unit testing will help us greatly down the line.
EDIT
All awesome input, thank you. I personally know how to write unit tests, but I don't have the experience to say whether or not a given unit test is a good one. I have just ordered Test Driven Development: By Example and will take the initiative to get the ball rolling on incorporating unit testing in our group.
You need to just start doing it, with or without permission. In the end it will make you more productive and increase your code quality. You can start small by including unit tests for something critical, and once you've shown the benefits, you're in.
Start unit testing the functions or classes when you create them. Begin with simple classes/functions that do not have external dependencies (DB, file system).
Share your progress inside the team. Count the number of tests and display a big chart showing your progress (unless the management/analysts are very hostile against unit testing).
Read about TDD: "Test-Driven Development: By Example". Writing the tests first leads to code that can be easily tested. If you write the production code first, you may have a hard time putting it under test.
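A tiny, made-up Java/JUnit example of the test-first rhythm: the test below is written before the Vat class exists (so it fails - it doesn't even compile), and then the simplest code is added to make it pass.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Step 1: write the test first. Vat and its withVat method are invented names.
public class VatTest {

    @Test
    public void addsTwentyPercentVat() {
        assertEquals(120.0, Vat.withVat(100.0, 0.20), 0.0001);
    }
}

// Step 2: written after the test, with the simplest implementation that passes.
class Vat {
    static double withVat(double net, double rate) {
        return net * (1.0 + rate);
    }
}
```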
I would like to recommend the book
Pragmatic Software Testing
The book Pragmatic Unit Testing is also a good one; although it focuses more on C# and NUnit, there are topics which are applicable independent of language.
Just do it...
If it's part of your personal process for any new code, you'll at least have that covered. You'll probably never get permission to add unit tests to cover all your code ever but you can at least be sure that no future changes undo work from a given point in time.
Think of it from the business side: why do you need to write code to prove that the code you've already written is correct? Why is it wrong?
Getting 100% coverage would be nice, but take your time getting there and don't write tests for existing code that isn't currently wrong; write the tests as things break so you at least never undo something unintentionally.
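For example, when a bug report comes in, the first step can be a test that reproduces it before the fix goes in. The sketch below is a made-up Java/JUnit illustration; the bug number and rounding rule are invented.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class RoundingRegressionTest {

    // The (already fixed) code under test; the original version truncated
    // instead of rounding, which is what the bug report was about.
    static double roundToCents(double amount) {
        return Math.round(amount * 100.0) / 100.0;
    }

    @Test
    public void bugReport1234_roundsHalfCentUp() {
        // This test failed before the fix and now guards against the bug returning.
        assertEquals(10.13, roundToCents(10.125), 0.0001);
    }
}
```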
You don't even need to discuss it with management (though the situation you describe is far from ideal). It's like Design by Contract - I introduced it in previous code I worked on, just as part of the development process. Once it's in, and it works, trust me, no one will dare to remove it.
Unit testing can also be seen as a part of development. Hence, if you have "20 days" for developing features A, B and C, you can typically include unit tests in your estimations for development itself.
Making unit tests is surprisingly easy. Compared with multithreading problems or any sort of complex design, unit testing is very easy to grasp for any competent developer.
You can read good literature about it (you have dozens of tutorials online) in, say, half a day, and start doing your first tests in the afternoon.
Really - just do it!
I know this may not be readily available to you, but I believe getting someone who's test-infected onto your team to show you the way is the easiest way to maintain productivity when introducing testing. Your people obviously need to know the concepts, which they can learn by reading books or attending user groups. A test-infected developer can show you how to do it in your practical everyday work with an absolutely minimal loss in productivity while doing so. It wouldn't surprise me if your speed increased immediately, but I wouldn't make any claims of that sort. With test experience also comes knowledge of testable designs, which I think is key.