How to gracefully integrate unit testing where none is present? [closed] - unit-testing

I have been tasked with developing a document for internal testing standards and procedures in our company. I've been doing plenty of research and have found some good articles, but I always like to reach out to the community here for input as well.
That being said, my question is this: How do you take a company that has a very large legacy code base that is barely testable, if at all testable, and try to test what you can efficiently? Do you have any tips on how to create some useful automated test cases for tightly coupled code?
All of our new code is being written to be as loosely coupled as possible, and we're all pretty proud of the direction we're going with new development. For the record, we're a Microsoft shop transitioning from VB to C# ASP.NET development.

There are actually two aspects to this question: technical, and political.
The technical approach is quite well defined in Michael Feathers' book Working Effectively With Legacy Code. Since you can't test the whole blob of code at once, you hack it apart along imaginary non-architectural "seams". These would be logical chokepoints in the code, where a block of functionality seems like it is somewhat isolated from the rest of the code base. This isn't necessarily the "best" architectural place to split it, it's all about selecting an isolated block of logic that can be tested on its own. Split it into two modules at this point: the bulk of the code, and your isolated functions. Now, add automated testing at that point to exercise the isolated functions. This will prove that any changes you make to the logic won't have adverse effects on the bulk of the code.
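As a rough C# sketch of what cutting a seam can look like (class and interface names are invented for illustration, and NUnit is assumed since the poster is a C#/ASP.NET shop): a legacy routine that talked to the database directly gets its dependency extracted behind an interface, so the isolated logic can be exercised with a fake.

```csharp
using NUnit.Framework;

// Hypothetical legacy scenario: CreditChecker used to query the database directly.
// The seam is the IInvoiceGateway interface, which lets a test supply a fake.
public interface IInvoiceGateway
{
    decimal GetOutstandingBalance(int customerId);
}

public class CreditChecker
{
    private readonly IInvoiceGateway _invoices;

    // The dependency is now passed in instead of being created internally.
    public CreditChecker(IInvoiceGateway invoices)
    {
        _invoices = invoices;
    }

    public bool CanPlaceOrder(int customerId, decimal orderTotal)
    {
        // The isolated block of logic we want under test.
        return _invoices.GetOutstandingBalance(customerId) + orderTotal <= 1000m;
    }
}

[TestFixture]
public class CreditCheckerTests
{
    private class FakeInvoiceGateway : IInvoiceGateway
    {
        public decimal Balance { get; set; }
        public decimal GetOutstandingBalance(int customerId) { return Balance; }
    }

    [Test]
    public void RejectsOrderThatWouldExceedTheCreditLimit()
    {
        var checker = new CreditChecker(new FakeInvoiceGateway { Balance = 900m });
        Assert.IsFalse(checker.CanPlaceOrder(customerId: 42, orderTotal: 200m));
    }
}
```

Tests like this exercise the isolated functions through the seam, so you'll know if later refactoring changes their behaviour.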
Now you can go to town and refactor the isolated logic following the SOLID OO design principles, the DRY principle, etc. Martin Fowler's Refactoring book is an excellent reference here. As you refactor, add unit tests to the newly refactored classes and methods. Try to stay "behind the line" you drew with the split you created; this will help prevent compatibility issues.
What you want to end up with is a well-structured set of fully unit tested logic that follows best OO design; this will attach to a temporary compatibility layer that hooks it up to the seam you cut earlier. Repeat this process for other isolated sections of logic. Then, you should be able to start joining them, and discarding the temporary layers. Finally, you'll end up with a beautiful codebase.
Note in advance that this will take a long, long time. And thus enters the politics. Even if you convince your manager that improving the code base will enable you to make changes better/cheaper/faster, that viewpoint probably will not be shared by the executives above them. What the executives see is that time spent refactoring code is time not spent on adding requested features. And they're not wrong: what you and I may consider to be necessary maintenance is not where they want to spend their limited budgets. In their minds, today's code works just fine even if it's expensive to maintain. In other words, they're thinking "if it ain't broke, don't fix it."
You'll need to present to them a plan to get to a refactored code base. This will include the approach, the steps involved, the big chunks of work you see, and an estimated timeline. It's also good to present alternatives here: would you be better served by a full rewrite? Should you change languages? Should you move to a service-oriented architecture? Should you move it into the cloud and sell it as a hosted service? These are all questions they should be considering at the top, even if they aren't thinking about them today.
If you do finally get them to agree, waste no time in upgrading your tools and setting up a modern development chain that includes practices such as peer code reviews and automated unit test execution, packaging, and deployment to QA.
Having personally barked up this tree for 11 years, I can assure you it's anything but easy. It requires a change all the way at the top of the tech ladder in your organization: CIO, CTO, SVP of Development, or whoever. You also have to convince your technical peers: you may have people who have a long history with the old product and who don't really want to change it. They may even see your complaints about its current state as a personal attack on their skills as coders, and may look to sabotage or sandbag your efforts.
I sincerely wish you nothing but good luck on your venture!

Related

Is it really reasonable to write tests at the early stage? [closed]

I used to write tests while developing my software, but I stopped because I noticed that, almost always, the first API and structures I thought were great turned out to be clumsy after some progress. I would need to rewrite the entire main program and the entire test suite every time.
I believe this situation is common in reality. So my questions are:
Is it really common to write the tests first, as TDD prescribes? I'm just an amateur programmer, so I don't know the real development world.
If so, do people rewrite the tests again (and again) when they revamp the software API/structure? (Unless they're smart enough to think up the best one at first, unlike me.)
I don't know of anyone who recommends TDD when you don't know what you're building yet. Unless you've created a very similar system before, you prototype first, without TDD. There is a very real danger, however, of ending up putting the prototype into production without ever bringing the TDD process into play.
Some common ways of doin' it right are…
A. Throw the prototype away, and start over using TDD (can still borrow some code almost verbatim from the prototype, just re-implement following the actual TDD cycle).
B. Retrofit unit tests into the prototype, and then proceed with red, green, refactor from there.
but I stopped it because I noticed that, almost always, the first api and structures I thought great turn out to be clumsy after some progress
Test driven development should help you with the design. An API that is "clumsy" will seem clumsy as you write your tests for it.
Is it really common to write the tests first, as TDD prescribes?
Depends on the developers. I use Test driven development for 99% of what I write. It aids in the design of the APIs and applications I write.
If so, do people rewrite the tests again (and again) when they revamp the software API/structure?
Depends on the level of the tests. Hopefully, during a big refactor (that is, when you rewrite a chunk of code), you have some tests in place to cover the work you are about to do. Some unit tests will be thrown away, but integration and functional tests will be very important. They are what tells you that nothing has been broken.
You may have noticed I've made a point of writing "test driven development" and not "TDD". Test driven development is not simply "writing tests first"; it is allowing the tests to drive the development cycle. The design of your API will be strongly affected by the tests that you write (contrived example: that singleton or service locator will be replaced with IoC). Writing good APIs takes practice, and learning to listen to the tools you have at your disposal.
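To make that contrived example concrete, here is a rough C# sketch (type names invented, NUnit assumed) of the shift the tests tend to push you toward: a hidden service-locator lookup is awkward to fake, while a constructor-injected dependency is trivial to replace.

```csharp
using System;
using System.Collections.Generic;
using NUnit.Framework;

public interface IClock
{
    DateTime Now { get; }
}

// A stand-in for the kind of global locator the answer mentions (hypothetical).
public static class ServiceLocator
{
    private static readonly Dictionary<Type, object> Services = new Dictionary<Type, object>();
    public static void Register<T>(T service) { Services[typeof(T)] = service; }
    public static T Get<T>() { return (T)Services[typeof(T)]; }
}

// Before: the dependency is fetched through a hidden static lookup,
// which a test cannot easily substitute.
public class ReportHeaderWithLocator
{
    public string Build()
    {
        return "Report generated " + ServiceLocator.Get<IClock>().Now.ToShortDateString();
    }
}

// After: the pain of testing pushes the dependency into the constructor,
// where an IoC container (or a test) can supply it explicitly.
public class ReportHeader
{
    private readonly IClock _clock;
    public ReportHeader(IClock clock) { _clock = clock; }

    public string Build()
    {
        return "Report generated " + _clock.Now.ToShortDateString();
    }
}

[TestFixture]
public class ReportHeaderTests
{
    private class FixedClock : IClock
    {
        public DateTime Now { get { return new DateTime(2020, 1, 1); } }
    }

    [Test]
    public void UsesTheInjectedClock()
    {
        var header = new ReportHeader(new FixedClock());
        StringAssert.Contains("2020", header.Build());
    }
}
```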
Purists say yes, but in practice it works out a little differently. Sometimes I write a half dozen tests and then write the code that passes them. Other times I will write several functions before writing the tests, because those functions are not meant to be used in isolation or because testing them first would be hard.
And yes, you may find you need to rewrite tests as the API changes.
And to the purists, even they will admit that some tests are better than none.
Is it really reasonable to write tests at the early stage?
No, if you are writing top-down, high-level integration tests that need a real database or an internet connection to another website in order to work.
Yes, if you are implementing bottom-up with unit testing (i.e., testing a module in isolation).
The higher the "level", the more difficult the unit testing becomes, because you have to introduce more mocking/abstraction.
In my opinion the architectural benefits of TDD only apply when combined with unit testing, because this is what drives the separation of concerns.
When I started TDD I had to rewrite many tests when changing the API/architecture. With more experience, today there are only a few cases where this is necessary.
You should have a first layer of tests that verifies the externally visible behavior of your API regardless of its internals.
Updating this kind of tests when a new functional requirement emerges is not a problem. In the example you mention, it would be easy to adjust to new websites being scraped - you would just add new assertions to the tests to account for the new data fetched.
The fact that "the scraping code had to be revamped entirely" shouldn't affect the structure of these higher level tests, because from the outside, the API should be consumed exactly the same way as before.
If such a low-level technical detail does affect your high level tests, you're probably missing an abstraction that describes what data you get but hides the details of how it is retrieved.
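As a small illustration of that missing abstraction (names invented, NUnit assumed): the high-level tests talk to an interface that describes what data comes back, while the scraping details live behind it and can be revamped freely.

```csharp
using System.Collections.Generic;
using NUnit.Framework;

// Describes WHAT data we get; hides HOW it is retrieved.
public interface IQuoteSource
{
    IReadOnlyList<decimal> GetLatestPrices(string symbol);
}

// The scraping implementation can be rewritten entirely without touching
// the higher-level tests, because they only know about IQuoteSource.
public class PriceReport
{
    private readonly IQuoteSource _source;
    public PriceReport(IQuoteSource source) { _source = source; }

    public decimal AveragePrice(string symbol)
    {
        var prices = _source.GetLatestPrices(symbol);
        decimal sum = 0m;
        foreach (var p in prices) sum += p;
        return prices.Count == 0 ? 0m : sum / prices.Count;
    }
}

[TestFixture]
public class PriceReportTests
{
    private class CannedQuoteSource : IQuoteSource
    {
        public IReadOnlyList<decimal> GetLatestPrices(string symbol)
        {
            return new List<decimal> { 10m, 20m, 30m };
        }
    }

    [Test]
    public void AveragesWhateverTheSourceReturns()
    {
        var report = new PriceReport(new CannedQuoteSource());
        Assert.AreEqual(20m, report.AveragePrice("ACME"));
    }
}
```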
Writing tests before you write the actual code would mean you know how your application will be designed. This is rarely the case.
As a matter of fact, I for example start by writing everything in a single file. It might have a few hundred lines or more. This way I can easily and quickly redesign the API. Later, when I decide I like it and that it's good, I start refactoring it by putting everything in meaningful namespaces and separate files.
When this is done I start writing tests to verify everything works fine and to find bugs.
TDD is just a myth. It is not possible to write the tests first and the code later, especially if you are at the beginning.
You always have to keep in mind the KISS rule. If you need some crazy stuff like fakes or mocks to test your own code, you have already failed it.

How to write good software without getting stuck [closed]

I've been working for years on my personal project, an operating system made from scratch. As you may imagine, it's quite complicated stuff. The problem is that I've started this from scratch many times. Meaning that at some point (quite an advanced one too, I had hard disk read/write and some basic networking), things got too confusing and I decided to throw it all out the window and try again.
Over the years I've learnt how to make the code look nicer; I read "Clean Code - A Handbook of Agile Software Craftsmanship" by Robert Martin and it helped a lot. I learnt to make functions smaller, organize things in classes (I used C, now C++) and namespaces, handle errors appropriately with exceptions, and test.
This new approach, however, has got me stuck to the point that I spend most of my time checking that everything is going well: that the code reads well, that it is easy to follow, well commented and tested. Basically I haven't made any meaningful progress in months. When I see my well-written code, it's difficult to add new functionality: I think "where should I put this? Have I already used this piece of code? What would be the best way to do this?" and too often I postpone the work.
So, here's the problem. Do you know any code-writing strategy that lets you write working, tested, nice code without spending 90% of the time thinking about how to make it working, tested and nice?
Thanks in advance.
Do you know any code-writing strategy that lets you write working, tested, nice code without spending 90% of the time thinking about how to make it working, tested and nice?
Yes, here.
Seriously, no. It is not possible to write good code without thinking.
When I see my well-written code, it's difficult to add a new functionality and think "where should I put this? Have I already used this piece of code? What would be the best way to do this?" and too often I postpone the work.
This is called "analysis paralysis". You might be interested in reading the "Good Enough Software" section of The Pragmatic Programmer. Your code doesn't have to be perfect.
Those things are widely discussed. To me, this legendary blog entry by Joel Spolsky and the follow-up discussion all over the web (Robert Martin answered it) contain all the pros and cons and are still fun to read.
To get an idea here's a quote by Jamie Zawinski which appears in the post linked to above:
“At the end of the day, ship the fu****g thing! It’s great to rewrite your code and make it cleaner and by the third time it’ll actually be pretty. But that’s not the point—you’re not here to write code; you’re here to ship products.”
I suggest you give TDD (test driven development) a run.
In this context, you will write automated tests for each piece of functionality before implementing it, then you run the tests after completing the feature.
If the tests pass, then you are done and can start another feature. As a bonus, the tests will accumulate over time, and you will soon have a test suite you can use for regression testing (to make sure you haven't broken anything while writing new code); this addresses your fear of breaking things in the "nice code".
Also, TDD will let you focus on developing exactly what you need, not more, so it tends to lead to nicer and simpler design (especially in interfaces, since you have to think about interfaces before you start coding, so "thought" drives the interfaces, rather than "whatever happens to be handier when I'm coding it".)
However, be aware that applying automated tests to an OS may provide some amount of technical challenge!

How do you decide what to test in your test suites?

I'm an intern working on a project that has the potential to introduce a lot of bugs at a company with an extremely large code base. Currently the company has no automated testing implemented for any of their projects, so I want to begin writing tests for the code as I go so that I can tell when I break something, but I have a hard time developing an intuition for what is worth testing and how to test it. Some things are more obvious than others: testing string manipulation functions isn't too tough, but what to write for a multithreaded custom memory manager is trickier.
How do you go about designing tests for an existing codebase and what do you test for? How do you figure out what underlying assumptions the code is making?
Answer to most of your questions
http://www.amazon.com/Working-Effectively-Legacy-Michael-Feathers/dp/0131177052
No easy answers for you I'm afraid. That is just a tough spot to be in.
The method to be applied is:
1) Identify regions that deliver the most bang for the testing buck. (This is something you have to come up with - it's unique to your situation.)
2) Spend time getting to know each region. Identify its interactions with the rest of the code base.
3) Document those interactions using tests - these act as a regression "vice" that will hold your software in place while you make subsequent changes.
Now you have a safety net to work above. You can start making your enhancements/fixes/changes using a TDD approach.
The idea is that, slowly, islands of the codebase will emerge above the safety net until you reach a point of diminishing returns. Michael Feathers' WELC book, as posted by Pangea above, is a must-read if you're venturing into this area.
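The usual way to build that regression "vice" is a characterization (or "pinning") test: you don't assert what the legacy code should do, only what it currently does. A rough C# sketch, assuming NUnit and an invented legacy method:

```csharp
using NUnit.Framework;

// Hypothetical legacy routine whose exact rules nobody remembers.
public static class LegacyPricing
{
    public static decimal Discount(int quantity)
    {
        if (quantity > 100) return 0.15m;
        if (quantity > 10) return 0.05m;
        return 0m;
    }
}

[TestFixture]
public class LegacyPricingCharacterizationTests
{
    [Test]
    public void DiscountBehavesAsItDoesToday()
    {
        // These values were recorded by running the existing code,
        // not by reading a spec. They pin down the current behaviour
        // so later changes that break it are caught immediately.
        Assert.AreEqual(0m, LegacyPricing.Discount(1));
        Assert.AreEqual(0.05m, LegacyPricing.Discount(11));
        Assert.AreEqual(0.15m, LegacyPricing.Discount(101));
    }
}
```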
A similar question has been asked and answered here
Some quick thoughts from me:
1) At the beginning, add tests for newly written code, whether in a new project or for changes to an existing project.
2) Don't touch code that is running and is not being changed.
3) Concentrate on functionality that is used often or that is critical.
The subject is really manifold, and maybe you should try to get some training to get an overview. Assuming you are in the US, you can have a closer look here. Here is their course content.
They have also a long list of useful resources.

How do you maintain discipline when doing TDD? [closed]

When I get excited about a new feature I'm just about to implement, or about a bug that I've just "understood", there is the urge to just jump into the code and get hacking. It takes some effort to stop myself from doing that and to write the corresponding test first. Later the test often turns out to be a trivial 4-liner, but before writing it there's still the thought in the back of my head: "maybe I can skip this one, just this one time?" Ideally I'd like to get an urge to write the test, and only then, perhaps, the code :)
What method (or way of thinking or mind trick or self-reward policy or whatever) do you use to help maintain the discipline? Or do you just practice it until it feels natural?
I like the instant feedback from the test, that's reward enough for me. If I can reproduce a bug in a test that's a good feeling, I know I'm headed in the right direction as opposed to guessing and possibly wasting my time.
I like working Test-First because I feel like it keeps me more in tune with what the code is actually doing as opposed to guessing based on a possibly inaccurate mental model. Being able to confirm my assumptions iteratively is a big payoff for me.
I find that writing tests helps me to sketch out my approach to the problem at hand. Often, if you can't write a good test, it means you haven't thought enough about what it is that you're supposed to be doing. The satisfaction of being confident that I know how to tackle the problem once the tests are written is rather useful.
I'll let you know when I find a method that works. :-)
But seriously, I think your "practice until it feels natural" comment pretty much hits the nail on the head. A 4 line test may appear trivial, but as long as what you are testing represents a real failure point then it is worth doing.
One thing I have found to be helpful is to include code coverage validation as part of the build process. If I fail to write tests, the build will complain at me. If I continue failing to write tests, the continuous integration build will "error out" and everyone nearby will hear the sound I have wired to the "broken build" notification. After a few weeks of "Good grief... You broke it again?", and similar comments, I soon started writing more tests to avoid embarrassment.
One other thing (which only occurred to me after I had submitted the answer the first time) is that once I really got into the habit of writing tests first, I got great positive reinforcement from the fact that I could deliver bug-fixes and additional features with much greater confidence than I could in my pre-automated-test days.
Easiest way I've found is to just use TDD a lot. At some point, writing code without unit tests becomes a very, very nervous activity.
Also, try to focus on interaction or behavioral testing rather than state-based testing.
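A minimal sketch of what interaction testing looks like, assuming the Moq library and invented interfaces: instead of inspecting resulting state, the test verifies that the collaborator was called as expected.

```csharp
using Moq;
using NUnit.Framework;

public interface IMailer
{
    void Send(string to, string body);
}

public class OrderService
{
    private readonly IMailer _mailer;
    public OrderService(IMailer mailer) { _mailer = mailer; }

    public void Confirm(string customerEmail)
    {
        _mailer.Send(customerEmail, "Your order is confirmed.");
    }
}

[TestFixture]
public class OrderServiceTests
{
    [Test]
    public void ConfirmationSendsExactlyOneMail()
    {
        var mailer = new Mock<IMailer>();
        var service = new OrderService(mailer.Object);

        service.Confirm("jane@example.com");

        // Interaction/behavioural check: verify the collaboration happened,
        // rather than inspecting any resulting state.
        mailer.Verify(m => m.Send("jane@example.com", It.IsAny<string>()), Times.Once);
    }
}
```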
wear a green wristband
1) You pair with somebody else in your team. One person writes the test, the other implements.
It's called "ping-pong" pairing.
Doing this will force you to discuss design and work out what to do.
Having this discussion also makes it easier to see what tests you're going to need.
2) When I'm working on my own, I like to try out chunks of code interactively. I just type them in at the ruby prompt. When I'm experimenting like this I often need to set up some data for experimenting with, and some printout statements to see what the result is.
These little, self-contained throwaway experiments are usually:
a quick way to establish the feasibility of an implementation, and
a good place to start formalising things into a test.
I think the important part of keeping yourself in check as far as TDD is concerned is to have the test project set up properly. That way adding a trivial test case is indeed trivial.
If, in order to add a test, you first need to create a test project, then work out how to isolate components, when to mock things, etc., it goes into the "too hard" basket.
So I guess it comes back to having unit tests fully integrated into your development process.
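For what it's worth, once the test project, references, and runner are wired in, a "trivial" test really is only a few lines. A sketch with NUnit and an invented class:

```csharp
using NUnit.Framework;

// Invented example class, just to show how small the per-test cost is
// once the test harness is already in place.
public static class DiscountCalculator
{
    public static decimal For(decimal orderTotal)
    {
        return orderTotal >= 100m ? 0.1m : 0m;
    }
}

[TestFixture]
public class DiscountCalculatorTests
{
    [Test]
    public void NoDiscountForSmallOrders()
    {
        // Adding a case like this costs almost nothing,
        // which is what keeps the habit alive.
        Assert.AreEqual(0m, DiscountCalculator.For(10m));
    }
}
```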
When I first started doing TDD around 2000, it felt very unnatural. Then came the first version of .NET and NUnit (the JUnit port), and I started practising TDD at the Shu level (of Shu-Ha-Ri), which meant testing (first) everything, with the same questions as yours.
A few years later, at another workplace, together with a very dedicated, competent senior developer, we took the steps necessary to reach the Ha level. This meant, for example, not blindly staring at the coverage report, but asking "is this kind of test really useful, and does it add more value than it costs?".
Now, at another workplace, together with yet another great colleague, I feel that we're taking our first steps towards the Ri level. For us that currently means a great focus on BDD/executable stories. With those in place verifying the requirements at a higher level, I feel more productive, since I don't need to (re-)write a bunch of unit tests each time a class's public interface needs to change, a static call is replaced with an extension method, and so on.
Don't get me wrong, the usual TDD class tests are still used and provide great value for us. It's hard to put into words, but we're just so much better at "feeling" and "sensing" which tests make sense, and how to design our software, than I was capable of ten years ago.

YAGNI - The Agile practice that must not be named? [closed]

As I've increasingly absorbed Agile thinking into the way I work, yagni ("you aren't going to need it") seems to become more and more important. It seems to me to be one of the most effective rules for filtering out misguided priorities and deciding what not to work on next.
Yet yagni seems to be a concept that is barely whispered about here at SO. I ran the obligatory search, and it only shows up in one question title - and then in a secondary role.
Why is this? Am I overestimating its importance?
Disclaimer: to preempt the objections I'm sure I'll get, let me emphasize that yagni is the opposite of quick-and-dirty. It encourages you to focus your precious time and effort on getting the parts you DO need right.
Here are some off-the-top ongoing questions one might ask.
Are my Unit Tests selected based on user requirements, or framework structure?
Am I installing (and testing and maintaining) Unit Tests that are only there because they fall out of the framework?
How much of the code generated by my framework have I never looked at (but still might bite me one day, even though yagni)?
How much time am I spending working on my tools rather than the user's problem?
When pair-programming, the value of the observer's role often lies in "yagni".
Do you use a CRUD tool? Does it allow (nay, encourage) you to use it as an _RU_ tool, or a C__D tool, or are you creating four pieces of code (plus four unit tests) when you only need one or two?
TDD has subsumed YAGNI in a way. If you do TDD properly, that is, you only write tests that result in required functionality and then develop the simplest code that passes them, then you are following the YAGNI principle by default. In my experience, it is only when I get outside the TDD box and start writing code before tests, tests for things that I don't really need, or code that is more than the simplest possible way to pass the test, that I violate YAGNI.
In my experience the latter is my most common faux pas when doing TDD -- I tend to jump ahead and start writing code to pass the next test. That often results in me compromising the remaining tests by having a preconceived idea based on my code rather than the requirements of what needs to be tested.
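As a toy illustration of "the simplest code that passes" (hypothetical example in C# with NUnit): the first test only forces a hard-coded return; the generalisation gets written only when a second test demands it, which is what keeps YAGNI intact.

```csharp
using NUnit.Framework;

public static class Shipping
{
    // While only the first test below existed, the simplest passing
    // implementation was literally "return 0m;". The flat fee appeared
    // only when the second test demanded it.
    public static decimal CostFor(int items)
    {
        return items == 0 ? 0m : 5m;
    }
}

[TestFixture]
public class ShippingTests
{
    [Test]
    public void EmptyOrderShipsForFree()
    {
        Assert.AreEqual(0m, Shipping.CostFor(0));
    }

    [Test]
    public void NonEmptyOrderPaysTheFlatFee()
    {
        Assert.AreEqual(5m, Shipping.CostFor(3));
    }
}
```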
YMMV.
Yagni and KISS (keep it simple, stupid) are essentially the same principle. Unfortunately, I see KISS mentioned about as often as I see "yagni".
In my part of the wilderness, the most common cause of project delays and failures is poor execution of unnecessary components, so I agree with your basic sentiment.
The freedom to change drives YAGNI. In a waterfall project, the mantra is to control scope. Scope is controlled by establishing a contract with the customer. Consequently, the customer stuffs all they can think of into the scope document, knowing that changes to scope will be difficult once the contract has been signed. As a result, you end up with applications that have a laundry list of features, not a set of features that have value.
With an agile project, the product owner builds a prioritized product backlog. The development team builds features based on priority i.e., value. As a result, the most important stuff get built first. You end up with an application that has features that are valued by the users. The stuff that is not important falls off the list or doesn't get done. That is YAGNI.
While YAGNI is not a practice, it is a result of the prioritized backlog list. The business partner values the flexibility afforded the business, given that they can change and reprioritize the product backlog from iteration to iteration. It is enough to explain that YAGNI is the benefit gained when we readily accept change, even late in the process.
The problem I find is that people tend to bucket even writing factories or using DI containers (unless you already have them in your codebase) under YAGNI. I agree with JB King there. For many people I've worked with, YAGNI seems to be a license to cut corners / to write sloppy code.
For example, I was writing a PinPad API for abstracting multiple models/manufacturers of PIN pads. I found that unless I had the overall structure, I couldn't even write my unit tests. Maybe I'm not a very seasoned practitioner of TDD. I'm sure there'll be differing opinions on whether what I did is YAGNI or not.
I have seen a lot of posts on SO referencing premature optimization which is a form of yagni, or at least ydniy (you don't need it yet).
I don't see YAGNI as the opposite of quick-and-dirty, really. It is doing just what is needed and no more, and not planning as if the software someone writes has to last 50 years. It may come up rarely because there aren't really that many questions to ask around it, at least to my mind. It's similar to the "don't repeat yourself" and "keep it simple, stupid" rules that are common but aren't necessarily dissected and analyzed in 101 ways. Some things are simple enough that they are usually picked up soon after a little practice. Some things get developed behind the scenes, and if you turn around and look you may notice them; that may be another way to state it.