Convert project without introducing bugs - C++

I have the C++ code of an exe that contains a UI and some processing logic.
My goal is to separate the UI from the processing logic and to convert the exe into a DLL.
To do that, I am thinking of writing unit tests before touching any code, then making my modifications and verifying that the tests still pass.
The problem is that I am not sure whether this is the best approach and, if it is, whether there is a way to generate unit tests automatically.
BTW, I am using VS 2012.
Do you have any guidance for me?

It's relatively hard to write meaningful unit tests for GUIs. There are frameworks like FrogLogic's Squish that make GUI testing relatively easy, but most often these tools are not free.
Note also that it is no small feat to write unit tests "after the fact": the original code might have to be changed just to make it testable.

As far as I know, there are no tools for automatically bringing existing code under unit tests - if it were that easy, there would be no new bugs at all, right? As arne says in his answer, if code was not designed to be tested in the first place, it usually has to be changed to be testable.
The best you can do, in my opinion, is to learn some techniques for introducing unit tests with relatively few changes (so that you can introduce the unit tests before you start the "real" modifications); one book on this subject I've read recently is Michael Feathers' "Working Effectively with Legacy Code" (Amazon link: http://www.amazon.com/Working-Effectively-Legacy-Michael-Feathers/dp/0131177052). Although it has some shortcomings, it gives pretty detailed descriptions of techniques for introducing unit tests with relatively little risk.
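To give a flavour of those techniques, here is a minimal sketch of the "extract interface" move Feathers describes, written in C# with NUnit for brevity (the names LegacyCalculator, IPriceSource and FakePriceSource are hypothetical; the same move works identically in C++ with an abstract base class):

    using NUnit.Framework;

    // Extracted seam: the legacy class used to instantiate a concrete
    // price database internally, making it untestable without a live DB.
    public interface IPriceSource
    {
        decimal GetPrice(string sku);
    }

    public class LegacyCalculator
    {
        private readonly IPriceSource _prices;

        // The dependency is now injected, so a test can substitute a fake.
        public LegacyCalculator(IPriceSource prices) { _prices = prices; }

        public decimal TotalFor(string sku, int quantity)
        {
            return _prices.GetPrice(sku) * quantity;
        }
    }

    // A trivial hand-rolled fake is enough to bring the class under test.
    public class FakePriceSource : IPriceSource
    {
        public decimal GetPrice(string sku) { return 10m; }
    }

    [TestFixture]
    public class LegacyCalculatorTests
    {
        [Test]
        public void TotalFor_MultipliesPriceByQuantity()
        {
            var calc = new LegacyCalculator(new FakePriceSource());
            Assert.AreEqual(30m, calc.TotalFor("ABC", 3));
        }
    }

The point is that the seam is introduced with a minimal, low-risk edit (extracting an interface and injecting it), after which the unit test can run without the real dependency.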

Related

When is Unit Testing done in the software development process?

I am aware of what unit testing is, but when I read about Test Driven Development, I got confused.
Is unit testing something you do ahead of the code base, following the test-first-then-develop process? Or is it something you do mid- or post-development, so that you just refactor some parts of the code base (which is what I am doing in my project)?
I would highly appreciate any enlightenment.
Test Driven Development means that you write the unit tests before implementing the new feature. With this technique the developer focuses on writing only the code necessary to pass the test. The resulting designs are often cleaner and clearer than those achieved by other methods.
But compared to normal unit testing, the developer needs a lot more experience to follow this technique. Before starting to code, you need an understanding of the implementation, including a vision of all the layers involved or needed for the requirement (which is always the better way); otherwise you will have to refactor a lot of code afterwards.
Personally I like this approach, but as I said, it requires more experience.
If you are talking about unit testing only, it should always be part of the development process. Often you write your tests during development, sometimes when you think you are done. And if you need to refactor an existing method, it is also useful to write a test first to ensure you do not break existing functionality.
You should write unit tests as soon as you care that the function is performing properly. If you are doing TDD, that means you write the unit test before you write the unit. Otherwise, that means you write the test very shortly after creating the unit itself.
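To make the order concrete, here is one minimal TDD cycle sketched in C# with NUnit (the framework and the Calculator example are assumptions, not something from the question):

    using NUnit.Framework;

    // Step 1 (red): this test is written first and fails,
    // because Calculator.Add does not exist yet.
    [TestFixture]
    public class CalculatorTests
    {
        [Test]
        public void Add_TwoPositiveNumbers_ReturnsTheirSum()
        {
            var calculator = new Calculator();
            Assert.AreEqual(5, calculator.Add(2, 3));
        }
    }

    // Step 2 (green): write just enough production code to pass the test.
    public class Calculator
    {
        public int Add(int a, int b) { return a + b; }
    }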

What is unit testing, and does it require code being written?

I've joined a new team, and I've had a problem understanding how they are doing unit tests. When I asked where the unit tests are written, they explained they don't do their unit tests that way.
They explained that what they're calling unit tests is when they actually check the code they wrote locally, and that all of the points are being connected. To me, this is integration testing and just testing your code locally.
I was under the impression that unit tests are code written to verify the behavior of a small section of code. For example, you might write a unit test to make sure a method returns the right value and makes the appropriate calls to the database, using a framework like NUnit or MbUnit to help you with your assertions.
Unit testing, to me, is supposed to be fast and quick. You want these tests so you can automate them and have a huge suite of tests for your application to make sure that it behaves AS YOU EXPECT.
Can someone provide clarification in my or their misunderstandings?
I have worked places that did testing that way and called it unit testing. It reminded me of a quote attributed to Abe Lincoln:
Lincoln: How many legs does a dog have?
Other Guy: 4.
Lincoln: What if we called the tail a leg?
Other Guy: Well, then it would have 5.
Lincoln: No, the answer is still 4. Calling a tail a leg doesn't make it so.
They explained that what they're calling unit tests is when they actually check the code they wrote locally, and that all of the points are being connected.
That is not a unit test. That is a code review. Code reviews are good, but without actual unit tests things will break.
Unit tests involve writing code. Specifically, a unit test operates on one unit, which is just a class or component of your software.
If a class under test depends on another class, and you test both classes together, you have an integration test. Integration tests are good. Depending on the language/framework, you might use the same testing framework (e.g. JUnit for Java) for both unit and integration tests. If you have a dependency but mock or stub that dependency, then you have a pure unit test.
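To illustrate that last distinction, here is a hedged sketch using Moq (a mocking framework mentioned later in this thread; OrderService and IOrderRepository are hypothetical names). Testing OrderService against the real repository would be an integration test; mocking the repository makes it a pure unit test:

    using Moq;
    using NUnit.Framework;

    public interface IOrderRepository
    {
        void Save(string order);
    }

    public class OrderService
    {
        private readonly IOrderRepository _repository;

        public OrderService(IOrderRepository repository) { _repository = repository; }

        public void Place(string order) { _repository.Save(order); }
    }

    [TestFixture]
    public class OrderServiceTests
    {
        [Test]
        public void Place_SavesTheOrderToTheRepository()
        {
            // The dependency is mocked, so only OrderService is exercised;
            // no real repository (and no database) ever runs.
            var repository = new Mock<IOrderRepository>();
            var service = new OrderService(repository.Object);

            service.Place("order-42");

            repository.Verify(r => r.Save("order-42"), Times.Once());
        }
    }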
Unit testing, to me, is supposed to be fast and quick. You want these tests so you can automate them and have a huge suite of tests for your application to make sure that it behaves AS YOU EXPECT.
That is essentially correct. How 'fast and quick' developing unit tests is depends on the complexity of what is being tested and the skill of the developer writing the test. You definitely want to build up a suite of tests over time, so you know when something breaks as a codebase becomes more complex. That is how testing makes your codebase more maintainable, by telling you what ceases to function as you make changes.
Your team-mates are not doing unit testing. They are doing "fly by the seat of your pants" development.
Your assumptions are correct.
Doing a project without unit tests (as they do, don't be fooled) might seem nice for the first few weeks: less code to write, less architecture to think about, fewer problems to worry about. And you can see the code is working correctly, right?
But as soon as someone (someone else, or even the original coder) comes back to an existing piece of code to modify it, add a feature, or simply understand how it works and what exactly it does, things become a lot more problematic. And before you realize it, you'll spend your nights browsing through log files and debugging what seemed like a small feature, just because it needs to integrate with other code that nobody knows exactly how it works. And you'll hate your job.
If it's not worth testing (with actual unit tests), then it's not worth writing the code in the first place. Everyone who has tried coding both without and with unit tests knows that. Please, please, make them change their minds. Every time a piece of untested code is checked in somewhere, a puppy dies horribly.
Also, I should say, it's a lot (A LOT) harder to add tests later to a project that was written without testing in mind than to build the test and production code side by side from the very start. Testing not only helps you make sure your code works; it improves your code quality by forcing you to make good decisions (coding to interfaces, loose coupling, inversion of control, etc.).
"Unit testing" != "unit tests".
Writing unit tests is one specific method of performing unit testing. It is a very good one, and if your unit tests are written well, it can give you good value over a long time. But what they're doing is indeed unit testing. It's just the kind of unit testing that doesn't help you at all the next time you need to carve on the same code. And that's wasteful.
To add my two cents: yes, that is indeed not unit testing. IMHO, the main features of unit tests are that they should be fast, automated, and isolated. You can use a mocking framework such as RhinoMocks to isolate external dependencies.
Unit tests also have to be very simple and short. Ideally no more than a screen length. It is also one of the few places in software engineering where copy and pasting code might be a better solution than creating highly reusable and highly abstract functions. The reason simplicity is given such a high priority is to avoid the "Who watches the Watchers" problem. You really don't want to be in a situation where you have complex bugs in your unit tests, because they themselves aren't being tested. Here you are relying on the extreme simplicity and tiny size of the tests to avoid bugs.
The names of the unit tests also should be very descriptive, again following the simplicity and self documenting paradigm. I should be able to read the name of the test method and know exactly what it is doing. A quick glance at the code should show me exactly what functionality is being tested and if any external dependencies are being mocked.
The descriptive test names also make you think about the application as a whole. If I look at the entire test run, ideally just by looking at the names of all the tests that were run, I should have a fairly good idea of what the application does.
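As a sketch of what such self-documenting names can look like (a hypothetical Account class; NUnit assumed):

    using System;
    using NUnit.Framework;

    public class Account
    {
        public decimal Balance { get; private set; }

        public Account(decimal balance) { Balance = balance; }

        public void Withdraw(decimal amount)
        {
            if (amount > Balance)
                throw new InvalidOperationException("Insufficient funds.");
            Balance -= amount;
        }
    }

    [TestFixture]
    public class AccountTests
    {
        // Each name states the unit, the scenario and the expected outcome,
        // so the list of test names reads like a specification of the class.
        [Test]
        public void Withdraw_AmountLargerThanBalance_ThrowsInvalidOperationException()
        {
            var account = new Account(50m);
            Assert.Throws<InvalidOperationException>(() => account.Withdraw(100m));
        }

        [Test]
        public void Withdraw_ValidAmount_ReducesBalance()
        {
            var account = new Account(50m);
            account.Withdraw(20m);
            Assert.AreEqual(30m, account.Balance);
        }
    }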

How to write unit tests and not get bored in the development of a FOSS project?

I'm developing a cross-platform project that will support:
Four C++ compilers: GCC, MSVC, SunStudio, Intel.
Five operating systems: Linux, OpenSolaris, FreeBSD, Windows, Mac OS X.
I totally understand that without proper unit testing there is no chance to perform proper QA on all these platforms.
However, as you all know, writing unit tests is extremely boring and slows down the development process (because it is boring, and developing FOSS software shouldn't be).
How do you manage to write good unit-testing code and not stop writing code?
If you at least get a salary for this, you can say: at least I get something for it. But if you don't, it is much harder!
Clarification:
I understand that TDD should be the key, but TDD has the following very strict prerequisites:
You have exact specifications.
You have a fully defined API.
This is true for a project developed in a customer-provider style, but it can't be done for a project that evolves.
Sometimes, to decide what feature I need, I have to create something and see whether it works well, whether the API is suitable and helps me, or whether it is ugly and does not satisfy me.
I see the development process more as evolution and less as development according to specifications, because when I begin implementing some feature, I sometimes do not know whether it will work well and what model it will use.
This is quite a different style of development, and it contradicts TDD.
On the other hand, supporting a wide range of systems requires unit tests to make sure that the existing code works on the various platforms; if I want to support a new one, I only need to compile the code and run the tests.
I suggest doing no unit tests at all at first. Work a bit on the project and see where it leads. If you cannot put enough motivation into doing the obviously right thing, then work a bit on your problem: do some refactoring, some bug fixing, and multiple releases. If you then see what kinds of problems pop up, think of TDD as one of the possible tools to solve them.
The problems can be
low quality
high bug-fixing costs
reluctance to refactor (i.e. fear of changing existing code)
suboptimal APIs (APIs are used too late to change)
high testing costs (i.e. manual testing)
There is a big difference between theoretically knowing that unit testing and test first are the right approaches and experiencing the pain and learning from that experience. Motivation will come with this experience.
TDD is not a panacea. It can be implemented in a horrible fashion. It should not become a check box in your project check list.
Personally, I don't find testing boring. It's the first time I get to see my code actually run and find out whether it works or not!
Without some form of test program to run the new code directly, I wouldn't get to see it run until after I've built a user interface and wired it all together to make the new bits available through the UI and then, when it doesn't work the first time, I have to try to debug the new code, plus the UI, plus the glue that holds them together and dear god, I don't even know what layer the bug is in, never mind trying to identify the actual offending code. And even that much is assuming I still remember what I was working on before I went off on an excursion into UI-land.
A proper test harness bypasses all that and lets me just call the new code, localize any bugs to the tested section of code so they can be found quickly and fixed easily, see that it produces the right results, get my "it works!" rush, and move on to the next bit of code and my next rush of reward as quickly as possible.
Write them as you go, jumping from unit test to code, to unit test, to code... and so on.
Unit tests should follow all the best practices of production code, such as the DRY principle. If you get bored writing unit tests, you will also get bored writing production code.
Test-Driven Development (TDD) may help you, though, as you constantly switch back and forth between writing a unit test and then a bit of production code.
As others have told you: writing the tests first makes it fun. Your statement that it can't be done for a project that evolves needs to be reconsidered; actually, the opposite is true. If you are going the agile route, you are highly discouraged from defining everything up front. TDD fits a paradigm in which that is impossible and change will happen. A book that makes this very clear, and gives examples of it, is Applying UML and Patterns.
Try using TDD (Test Driven Development): instead of writing your tests after the actual coding is done, write them before and let them drive your design.
Due to the nature of the project, a fair amount of automation is required: find a way to write a test once for one OS/compiler and then run it for all of the other available options.
Personally, I find writing code that I know works is quite exhilarating. But if you don't want to be bored writing unit tests then you'll need to cultivate a fascination for debugging.
To be serious, if you think that writing unit tests is boring and slow, you really need to re-address how you write them. I suggest you investigate using Test Driven Development. Write the tests in the programming language and run them automatically. Use the feedback from the tests to shape your code.
There are Test First frameworks for pretty much any language you care to mention, inspired by Kent Beck and Erich Gamma's work with JUnit. The Wikipedia article on TDD has more info, including a helpful link to a list of frameworks organized by language. Find out more.

How to start unit testing or TDD?

I have read a lot of posts that convinced me I should start writing unit tests. I also started to use dependency injection (Unity) for the sake of easier mocking, but I'm still not quite sure at what stage I should start writing the unit tests and mocks, or how and where to start.
Would the preferred way be to write the unit tests before the methods, as described in the TDD methodology?
Is there a different methodology or manner for unit testing?
Test first / test after:
It should be noted that 'test first' as part of TDD is just as much (if not more) to do with design as it is to do with unit testing. It's a software development technique in its own right -- writing the tests results in a constant refining of the design.
On a separate note: If there is one significant advantage to TDD from a pure unit testing perspective, it is that it is much harder (though not impossible) to write a test that's wrong when doing TDD. If you write the test beforehand, it should always fail because the logic required to make the test pass does not yet exist. If you write the test afterwards, the logic should be there, but if the test is bugged or is testing the wrong thing, it may pass regardless.
I.e. if you write a bad test before, you may get a green light when you expect a red (so you know the test is bad). If you write a bad test afterwards, you will get a green light when you expected a green (unaware of the bad test).
Books
The Pragmatic Unit Testing book is well worth a look, as is Roy Osherove's "The Art of Unit Testing". The Pragmatic book is more narrowly focused on the different types of test inputs you can try in order to find bugs, whereas TAOUT covers a wider spread of topics such as test doubles, strategies, maintainability, etc. Either book is good; it depends what you want from it.
Also, here's a link to a talk Roy Osherove did on unit testing. It's worth a watch (so are some of the test review videos he recorded, as he points out various problems and dos/don'ts along with reasons why).
How to start
There's nothing better than writing code. Find a fairly simple class that doesn't reference much else. Then, start writing some tests.
Always ask yourself "what do I want to prove with this test?" before you write it, then give it a decent name, usually involving the method being called, the scenario, and the expected result, e.g. on a stack: "Pop_WhenStackIsEmpty_ThrowsException".
Think of all the inputs you can throw at it, different combinations of methods that may yield interesting results, and so forth.
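Continuing the stack example, a first batch of such tests might look like this (sketched with NUnit against the built-in Stack<int> for self-containment; your own class under test would take its place):

    using System;
    using System.Collections.Generic;
    using NUnit.Framework;

    [TestFixture]
    public class StackTests
    {
        [Test]
        public void Pop_WhenStackIsEmpty_ThrowsException()
        {
            var stack = new Stack<int>();
            Assert.Throws<InvalidOperationException>(() => stack.Pop());
        }

        [Test]
        public void Pop_AfterPushingOneItem_ReturnsThatItem()
        {
            var stack = new Stack<int>();
            stack.Push(42);
            Assert.AreEqual(42, stack.Pop());
        }

        [Test]
        public void Pop_AfterPushingTwoItems_ReturnsTheMostRecentlyPushed()
        {
            // Last in, first out: a "combination of methods" worth pinning down.
            var stack = new Stack<int>();
            stack.Push(1);
            stack.Push(2);
            Assert.AreEqual(2, stack.Pop());
        }
    }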
If you are curious about unit testing the best way to learn it is try it. You will probably start writing integration tests at first, but that is fine. When they seem too difficult to maintain or too much work to write, read more about what kind of tests there are (unit, functional, integration) and try to learn the difference.
If you are interested in starting with TDD, Uncle Bob is a good source. Particularly this.
More on unit testing
Ensure that you get consistent test results. Running the same test repeatedly should return the same results consistently.
The tests should not require configuration.
Testing order should not matter. This means partial test runs can work correctly. Also, if you keep this design philosophy in mind, it will likely aid in your test creation.
Remember that any testing is better than no testing. The difficulty can be found in writing good clean unit tests that promote ease of creation and maintenance. The more difficult the testing framework, the more opposition there will be to employing it. Testing is your friend.
In C# and with Visual Studio, I find the following procedure very helpful:
Think! Make a small upfront design. You need a clear picture of what classes you need and how objects should relate to each other. Concentrate on only one class/object (the one you want to implement) and one relationship; otherwise you end up with too heavyweight a design. I often end up with multiple sketches (only a few boxes and arrows) on a spare sheet of paper.
Create the class in the production code and name it appropriately.
Pick one behaviour of the class you want to implement and create a method stub for it. With Visual Studio, creating empty method stubs is a piece of cake.
Write a test for it. To do that, you will need to initialize the object, call the method, and make an assertion to verify the result. In the easiest case, the assertion requires another method stub or a property in the production code.
Compile and let the test runner show you the red bar!
Code the required behavior in the production code to see the green bar appear.
Move to the next behaviour.
For this procedure two things are very important:
You need a clear picture of what you want and how the class/object should look. At least spend some time on it.
Think about behaviours of the class/object. This will make the tests and the production code behaviour-centric, which is a very natural approach to think about classes/objects.
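A condensed sketch of the middle steps of this procedure (the Invoice example is hypothetical; NUnit is assumed as the test framework, though Visual Studio's own unit tests work the same way):

    using NUnit.Framework;

    // Create the class and a method stub for one behaviour.
    public class Invoice
    {
        public decimal Total { get; private set; }

        public void AddLine(decimal unitPrice, int quantity)
        {
            // As an empty stub, this made the test below fail (red bar).
            // The real behaviour then makes the test pass (green bar).
            Total += unitPrice * quantity;
        }
    }

    // The test: initialize the object, call the method, assert on the result.
    [TestFixture]
    public class InvoiceTests
    {
        [Test]
        public void AddLine_SingleLine_TotalIsPriceTimesQuantity()
        {
            var invoice = new Invoice();
            invoice.AddLine(9.99m, 2);
            Assert.AreEqual(19.98m, invoice.Total);
        }
    }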
Test first or not test first?
I usually find it harder to retrofit tests to existing production code. In most cases this is due to tangled dependencies on other objects when the object under test needs to be initialized. TDD usually avoids this, because you want to initialize the object in the test cases with as little effort as possible, which results in very loose coupling.
When I retrofit tests, the most cumbersome job is initializing the object under test. The assertions may also be a lot of work because of tangled dependencies, and for this you will need to change the production code and break dependencies. By using dependency injection properly, this should not be an issue.
Yes, the preferred way of doing TDD is to write the test first (as implied by the name Test-Driven Development). When you start out with TDD it can be hard to know where to start writing the tests, so I would suggest being pragmatic about it. After all, the most important part of our job is delivering working code, not so much how the code was crafted.
So you can start by writing tests for existing code. Once you get the hang of how the unit tests are structured, which ones seem to do a good job and which ones seem not so good, you will find it easier to dive more into the test-first approach. I have found that I write tests first to a greater extent as time goes by. It simply becomes more natural with increased experience.
Steve Sanderson has a great writeup on TDD best practices.
http://feeds.codeville.net/~r/SteveCodeville/~3/DWmOM3O0M2s/
Also, there's a great set of tutorials for doing an ASP.net MVC project that discusses a lot of TDD principles (if you don't mind learning ASP.net MVC along the way).
http://www.asp.net/learn/mvc-videos/ Look for the "Storefront" series at the bottom of the page.
MOQ seems to be the hot mocking framework lately; you may want to look into that as well.
In summary, try to write a test to validate something you're trying to achieve, then implement the code to make it work.
The pattern is known as Red - Green - Refactor. Also, do your best to minimize dependencies so that your tests can focus on one component.
Personally, I use Visual Studio Unit Tests. I'm not a hardcore TDD developer, but what I like to do is this:
Create a new project and define a few of the fundamental classes based on the system design (that way I can at least get some IntelliSense).
Create a unit test project and start writing unit tests for the functionality I'm trying to implement.
Make them fail.
Make them pass (implement).
Refactor.
Repeat, making the tests more stringent or creating more tests until I feel it's solid.
I also find it very useful for adding functionality onto an existing code base. If you want to add some new feature, first create the unit test for what you want to add, step through the code to see what you have to change, then go through the TDD process.
Choose a small, non-critical application and implement it using TDD. At first the new way of thinking will feel weird, but after maybe a week or two of practice it will feel natural.
Here is a tutorial application (in branch "tutorial") that shows what kinds of tests to write. In that tutorial you write code to pass the predefined test cases, so that you get into the rhythm, and later you then write your own tests. The README file contains instructions. It's written in Java, but you can easily adapt it to C#.
I would take on TDD, test-first development, before mocks and dependency injection. To be sure, mocks can help you better isolate your units - and thus do better unit testing - but to my mind, mocking and DI are more advanced concepts that can interfere with the clarity of just writing your tests first.
Mocks, and DI, have their place; they're good tools to have in your toolbox. But they take some sophistication, a more advanced understanding, than the typical testing neophyte has. Writing your tests first, however, is exactly as simple as it sounds. So it's easier to take on, and it's powerful all by itself (without mocks and DI). You'll get earlier, easier wins by writing mock-free tests first, than by trying to begin with mocks, and TDD, and DI all at once.
Start with test-first; when you are very comfortable with it, and when your code is telling you you need mocks, then take on mocks.
I have worked for companies that take unit testing/integration testing too far and ones that do too little, so I like to think I have a good balance between the two.
I would recommend TDD - Test Driven Development. This ensures you have good coverage but it also keeps focusing your attention on the right place and problem.
So the first thing you do for every piece of new development is write a unit test - even if you don't have a single class to test.
Think about what you are testing. Now run the test. Why doesn't it compile? Because you need classA. Create the class and run the test. Why doesn't it compile? Because it doesn't have methodA. Write the method and run the unit test again. Why does the test fail? Because methodA isn't implemented. Implement methodA and run the test. Why does it fail? Because methodA doesn't return the correct expected value... etc.
You continue like this writing unit tests as you develop and then eventually the test will pass and the piece of functionality will be complete.
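The same dialogue, written out as code with the compiler's complaints as comments (the classA/methodA names are kept from the answer, capitalized per C# convention; the expected value 42 is arbitrary, and NUnit is assumed):

    using NUnit.Framework;

    [TestFixture]
    public class ClassATests
    {
        [Test]
        public void MethodA_ReturnsTheCorrectExpectedValue()
        {
            // 1. Does not compile: ClassA does not exist yet -> create it.
            var sut = new ClassA();

            // 2. Still does not compile: MethodA is missing -> add a stub.
            int result = sut.MethodA();

            // 3. Compiles but fails: the stub returns the wrong value.
            // 4. Implement MethodA properly and the test finally passes.
            Assert.AreEqual(42, result);
        }
    }

    public class ClassA
    {
        // The implementation that ends the red/green loop.
        public int MethodA() { return 42; }
    }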
Extending on Steve Freeman's answer: Dave Astels' book is called "Test-Driven Development: A Practical Guide". If the kind of application you're writing is a GUI application, then this should be helpful. I read Kent Beck's book, but I couldn't figure out how to start a project with TDD. Astels' book test-drives a complete, non-trivial GUI application from start to finish using stories. It helped me a lot to actually start with TDD; it showed me where and how to begin.
Test driven development can be confusing for beginners. A lot of the books I read when I was learning TDD teach you how to write unit tests for a Calculator class, but there seems to be very little help for building real-world apps, which are more data-centric, if I dare say. For me the breakthrough was when I understood Behaviour Driven Development (BDD) and how to do testing from the outside in. Now I can simply advise you to focus on your application's behaviour and write unit tests to verify it. There is a lot of debate between TDD and BDD, but I think that well-written automated tests at every level add value, and to write them we need to focus on behaviour.
Hadi Hariri has an excellent post here
http://hadihariri.com/2012/04/11/what-bdd-has-taught-me/
I have also written some articles on the topic that I feel will help in understanding all the concepts related to TDD here
http://codecooked.com/introduction-to-unit-testing-and-test-driven-development/
http://codecooked.com/different-types-of-tests-and-which-ones-to-use/
Read Pragmatic Unit Testing in C# with NUnit. It has comprehensive information about starting to write tests and structuring the code to make it more unit-testing friendly.
If you haven't written unit tests before, then just pick some classes and begin to write your unit tests, and continue to work on developing more unit tests.
As you gain experience you can then begin to mock out the database for example, by using the Unity framework, but, I would suggest starting simply and gaining experience before making this leap.
Once you are comfortable with how to write unit tests, then you can try doing TDD.
I prefer Kent Beck's approach, which is nicely explained in the book Test-Driven Development: By Example by Kent Beck.
From your question I can infer that you are not sure about the test framework. Choosing the correct test framework is very important for TDD, and for writing unit tests in general.
The only practical problem with TDD is that refactoring (we need to refactor the test code as well) takes a lot of time.
I think Dave Astels' book is still one of the best introductions. It's for Java, but you should be able to translate.

Starting UnitTesting on a LARGE project

Can anyone recommend some best practices on how to tackle the problem of starting to unit test a large existing code base?
The problems that I'm currently facing include:
HUGE code base
ZERO existing UnitTests
High coupling between classes
Complex OM (not much I can do here - it's a complex business domain)
Lack of experience in writing UnitTests/TDD
Database dependency
External sources dependencies (Web services, WCF services, NetBIOS, etc)
Obviously, I understand that I should start with refactoring the code to make it less coupled and more testable. However, doing such refactoring is risky without unit tests (chicken and egg, anyone?).
On a side note, would you recommend starting the refactoring and writing tests on the domain classes, or on the tier classes (logging, utilities, etc.)?
First, I second the "Working Effectively with Legacy Code" recommendation Jeffrey Frederick gave.
As you stated in the question, you cannot change your code because you currently have no unit tests to detect regressions, and you cannot add unit tests because your code base is not unit-testable. In this situation, you can create characterization tests: end-to-end automatic tests that help you detect changes in the external behavior of your software. Once those are in place, you can start to slowly modify the code.
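A characterization test asserts what the software currently does, not what it should do. A minimal sketch, assuming a hypothetical ReportGenerator legacy class and a previously recorded "golden" output file:

    using System.IO;
    using NUnit.Framework;

    // Stand-in for a real legacy class; in practice this already exists.
    public class ReportGenerator
    {
        public string Generate(int month, int year)
        {
            return string.Format("Report {0}/{1}", month, year);
        }
    }

    [TestFixture]
    public class ReportGeneratorCharacterizationTests
    {
        [Test]
        public void Generate_ProducesTheSameOutputAsBeforeTheChanges()
        {
            // Recorded once from the untouched legacy code,
            // then kept as a reference alongside the tests.
            string approved = File.ReadAllText("approved/monthly-report.txt");

            string actual = new ReportGenerator().Generate(3, 2013);

            // Any difference means the external behavior changed,
            // intentionally or not, and must be reviewed.
            Assert.AreEqual(approved, actual);
        }
    }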
However, putting your HUGE code base under test is an enormous effort and highly risky, and you'll probably burn out the team doing it, as they will have to put in a strong effort for a low reward in terms of test coverage. Putting the entire code base under test is an endless effort.
Instead, develop new capabilities outside the code base, so that it won't hinder you. Once the new code is fully tested, integrate it into the code base.
Also try to create unit tests every single time you fix a problem in the code base. It will be hard the first few times, but it will get easier once you have a unit testing environment ready to go.
Lack of experience in writing UnitTests/TDD
I think this is the most significant.
Standard advice is to start writing unit tests for all your new code first so that you learn how to unit test before you attempt to unit test your existing code. I think this is good advice in theory but hard to follow in practice, since most new work is modifications of existing systems.
I think you'd be well served by having access to a "player coach", someone who could work on the project with your team and teach the skills in the process of applying them.
And I think I'm legally obligated to tell you to get Michael Feathers' Working Effectively with Legacy Code.
There is good advice in the book Manage It! by Johanna Rothman.
I can also recommend the following:
Use unit tests on newly created source code fragments.
Create unit tests for bugs.
Create unit tests for the riskiest parts of the application.
Create unit tests for the most valuable parts of the application.
If the coupling is too high, create tests that are more like module or integration tests, but automated.
A single unit test will not help, but you need it to start. After a while there will be a handful of unit tests on the riskiest parts of the application, and that will help prevent bugs in those segments. When you get to this point, most of the developers will realize how useful unit tests are.
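For "create unit tests for bugs", a common pattern is a regression test that reproduces the defect before the fix and keeps guarding against it afterwards (the bug number and PriceParser are hypothetical, and NUnit is assumed):

    using System.Globalization;
    using NUnit.Framework;

    public static class PriceParser
    {
        public static decimal Parse(string text)
        {
            // The .Trim() is the fix for the bug described below.
            return decimal.Parse(text.Trim(), CultureInfo.InvariantCulture);
        }
    }

    [TestFixture]
    public class PriceParserRegressionTests
    {
        // Bug #1234 (hypothetical): input with trailing whitespace made the
        // old hand-rolled parser return zero. This test reproduced the
        // defect before the fix and now guards against a relapse.
        [Test]
        public void Parse_PriceWithTrailingWhitespace_ReturnsTheValue()
        {
            Assert.AreEqual(19.95m, PriceParser.Parse("19.95 "));
        }
    }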
You won't get this thing right on your own.
There's a lot of persuasion needed. So I want to give you this thread, where lots of information and hints for good argumentation are given: How do you persuade others to write unit-tests?
Only the whole team can create such a large set of unit tests.
Gross. I've had to deal with this too. The best thing to do, I think, is to lean heavily on yucky tools that you probably don't want to use (like NUnitAsp) at first, to get the existing code under tests. Then you can start refactoring, while those tools keep your codebase from falling apart from under you.
Then, as you refactor, write more logical unit tests on the new, testable pieces that you're creating and gradually retire the original tests.
Good luck with this...
Since you have little experience with writing unit tests, I recommend that you first try to gain at least some experience. Otherwise, it's likely that you'll abandon TDD/unit testing and maybe introduce new bugs into the system you're trying to unit test.
The best way to gain experience is to find someone experienced who can assist you.