Testing is important for any software project. I am new to testing, so how can I test a developed software project? What are the steps and levels of testing? I would also like to know how software projects are tested in companies.
Wikipedia has a good article on Software Testing, and it will be better than anything I write here. However, I'll try to describe the process at the highest level:
At the highest level you have perhaps three types of tests:
unit tests - tests for individual units (e.g. functions, methods). If you give function A the inputs x, y and z, does it return the right value? These are cheap, easy and fast, and help you to understand that individual units of code work precisely as they are designed to work.
system tests - do the individual units work together? This is where you test the business logic of your application and the contracts between units ("if you provide the arguments x, y, z, I'll return A" and "if you give me the wrong arguments, I'll raise error B"). These help you to understand whether the individual units work together to accomplish a task.
performance tests - performance in this context could mean raw speed, or it could mean capacity ("can the website handle 1 million hits per day?"), system load, latency, etc.
Unit tests are most often written using a framework such as JUnit (for Java), NUnit (for .NET), or something similar. There's probably an *Unit framework for whatever language you are using. Many software shops use their own custom tool for this. Most often (but not exclusively), unit tests are written by the developer who wrote the unit.
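To make that concrete, here is a minimal sketch of what such a unit test can look like with NUnit in C#; the Calculator class and its Add method are hypothetical stand-ins for whatever unit you are testing.

    using NUnit.Framework;

    [TestFixture]
    public class CalculatorTests
    {
        [Test]
        public void Add_ReturnsSumOfItsArguments()
        {
            var calculator = new Calculator();   // hypothetical unit under test

            int result = calculator.Add(2, 3);

            Assert.That(result, Is.EqualTo(5));
        }
    }

    // Minimal stand-in so the sketch is self-contained; in real code this is your production class.
    public class Calculator
    {
        public int Add(int x, int y) { return x + y; }
    }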
System tests can take many forms, and there's not always a single solution that will work for a particular application. For example, if your application is a web site you can have service layer tests ("if I call the web API of my site, does it return the right value?") and presentation layer tests ("if I click the button in the UI, does the form get posted to the proper URL?").
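As a rough illustration, a service-layer test might call the site's web API directly and assert on the response; the URL, route and expected body below are invented for the sketch.

    using System.Net;
    using System.Net.Http;
    using System.Threading.Tasks;
    using NUnit.Framework;

    [TestFixture]
    public class ProductApiTests
    {
        // Hypothetical base address of the site under test.
        private const string BaseUrl = "http://localhost:5000";

        [Test]
        public async Task GetProduct_ReturnsOkWithExpectedBody()
        {
            using (var client = new HttpClient())
            {
                HttpResponseMessage response = await client.GetAsync(BaseUrl + "/api/products/1");

                // The service layer should answer with 200 OK...
                Assert.That(response.StatusCode, Is.EqualTo(HttpStatusCode.OK));

                // ...and the body should contain the record we asked for.
                string body = await response.Content.ReadAsStringAsync();
                Assert.That(body, Does.Contain("\"id\":1"));
            }
        }
    }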
While unit tests are almost always automated, system level tests can be automated, manual, or a combination of automated and manual. User interface testing is often a manual process. While there are tools to drive the UI for a variety of types of applications, ultimately it's a very difficult problem to automatically answer questions like "does this look right?" and "Is this easy to use?". Those types of questions almost always have to be answered by a human trained to answer such questions.
Performance tests are almost exclusively automated, though an easy way to do performance testing is simply to time your automated system tests and watch the trends, and also to watch system metrics such as CPU and memory utilization while your system tests are running. This isn't an ideal performance testing strategy, but it's good enough if you're just starting out.
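If you take that simple approach, a small helper that times an existing test action and prints the elapsed time (plus, say, the process's working set) is often all you need to start spotting trends; the helper below is just a sketch.

    using System;
    using System.Diagnostics;

    public static class PerfProbe
    {
        // Wraps an existing automated test action, then reports how long it took and
        // how much memory the process was using, so trends can be watched run to run.
        public static TimeSpan Time(string name, Action testAction)
        {
            var stopwatch = Stopwatch.StartNew();
            testAction();
            stopwatch.Stop();

            long workingSetBytes = Process.GetCurrentProcess().WorkingSet64;
            Console.WriteLine("{0}: {1} ms, working set {2} MB",
                              name, stopwatch.ElapsedMilliseconds, workingSetBytes / (1024 * 1024));
            return stopwatch.Elapsed;
        }
    }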
So, to get started with testing, see if there is a unit testing framework already available for your language. You can then quickly come up with a body of tests for the individual units. You can then start looking for what are commonly called "testing frameworks" for the system tests. There are many, many frameworks to choose from. There is no "best", so don't get too caught up in finding the perfect tool. Pick any tool that works for your language and start using it.
Fundamentally there are these things going on:
Understand what the software is supposed to do.
Decide how to verify that it does.
Agree your test strategy with the stakeholders: there should be people who care about whether you are testing the right things, and they need to have confidence that you are doing so.
Perform the verification.
Report the results accurately, in enough detail to allow problems to be fixed.
The details depend upon the nature of the software. For example, what would you do if the software didn't have a UI? Or, if it has a UI, there are almost certain to be other things you need to test too (e.g. modules which load data from external systems); what proportion of your time will you spend on those?
There's a strong likelihood that some parts of the testing you decide is appropriate will need to be repeated as new releases of the software are made. You can make a distinction between "testing" and the subset which is "re-checking" and there may be value in automating the re-checking aspects.
One thing to bear in mind: I'm very suspicious of any attempt to reduce testing to a simple set of "steps". You might look at context-driven testing for an explanation.
I have been tasked with developing a document for internal testing standards and procedures in our company. I've been doing plenty of research and found some good articles, but I always like to reach out to the community for input here.
That being said, my question is this: How do you take a company that has a very large legacy code base that is barely testable, if at all testable, and try to test what you can efficiently? Do you have any tips on how to create some useful automated test cases for tightly coupled code?
All of our new code is being written to be as loosely coupled as possible, and we're all pretty proud of the direction we're going with new development. For the record, we're a Microsoft shop transitioning from VB to C# ASP.NET development.
There are actually two aspects to this question: technical, and political.
The technical approach is quite well defined in Michael Feathers' book Working Effectively With Legacy Code. Since you can't test the whole blob of code at once, you hack it apart along imaginary non-architectural "seams". These would be logical chokepoints in the code, where a block of functionality seems somewhat isolated from the rest of the code base. This isn't necessarily the "best" architectural place to split it; it's all about selecting an isolated block of logic that can be tested on its own. Split the code into two modules at this point: the bulk of the code, and your isolated functions. Now, add automated testing at that point to exercise the isolated functions. This will prove that any changes you make to that logic won't have adverse effects on the bulk of the code.
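As a sketch of what cutting such a seam can look like (the pricing logic here is invented for the example): introduce an interface at the chokepoint, move the isolated logic behind it, and test that logic on its own.

    using NUnit.Framework;

    // Step 1: an interface at the seam, so the bulk of the code calls an abstraction.
    public interface IPricingRules
    {
        decimal CalculateDiscount(decimal orderTotal);
    }

    // Step 2: the isolated block of logic, moved behind the interface.
    public class PricingRules : IPricingRules
    {
        public decimal CalculateDiscount(decimal orderTotal)
        {
            return orderTotal > 1000m ? orderTotal * 0.05m : 0m;
        }
    }

    // Step 3: the isolated functions can now be exercised on their own, so changes
    // to this logic can't silently break the bulk of the code.
    [TestFixture]
    public class PricingRulesTests
    {
        [Test]
        public void LargeOrders_GetFivePercentDiscount()
        {
            var rules = new PricingRules();
            Assert.That(rules.CalculateDiscount(2000m), Is.EqualTo(100m));
        }
    }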
Now you can go to town and refactor the isolated logic following the SOLID OO design principles, the DRY principle, etc. Martin Fowler's Refactoring book is an excellent reference here. As you refactor, add unit tests to the newly refactored classes and methods. Try to stay "behind the line" you drew with the split you created; this will help prevent compatibility issues.
What you want to end up with is a well-structured set of fully unit tested logic that follows best OO design; this will attach to a temporary compatibility layer that hooks it up to the seam you cut earlier. Repeat this process for other isolated sections of logic. Then, you should be able to start joining them, and discarding the temporary layers. Finally, you'll end up with a beautiful codebase.
Note in advance that this will take a long, long time. And thus enters the politics. Even if you convince your manager that improving the code base will enable you to make changes better/cheaper/faster, that viewpoint probably will not be shared by the executives above them. What the executives see is that time spent refactoring code is time not spent on adding requested features. And they're not wrong: what you and I may consider to be necessary maintenance is not where they want to spend their limited budgets. In their minds, today's code works just fine even if it's expensive to maintain. In other words, they're thinking "if it ain't broke, don't fix it."
You'll need to present them with a plan to get to a refactored code base. This will include the approach, the steps involved, the big chunks of work you see, and an estimated timeline. It's also good to present alternatives here: would you be better served by a full rewrite? Should you change languages? Should you move it to a service-oriented architecture? Should you move it into the cloud and sell it as a hosted service? All of these are questions they should be considering at the top, even if they aren't thinking about them today.
If you do finally get them to agree, waste no time in upgrading your tools and setting up a modern development chain that includes practices such as peer code reviews and automated unit test execution, packaging, and deployment to QA.
Having personally barked up this tree for 11 years, I can assure you it is anything but easy. It requires a change all the way at the top of the tech ladder in your organization: CIO, CTO, SVP of Development, or whoever. You also have to convince your technical peers: you may have people who have a long history with the old product and who don't really want to change it. They may even see your complaints about its current state as a personal attack on their skills as coders, and may look to sabotage or sandbag your efforts.
I sincerely wish you nothing but good luck on your venture!
I used to write tests while developing my software, but I stopped because I noticed that, almost always, the first API and structures I thought were great turned out to be clumsy after some progress. I would then need to rewrite the entire main program and all the tests.
I believe this situation is common in reality. So my questions are:
Is it really common to write tests first, as TDD prescribes? I'm just an amateur programmer, so I don't know the real development world.
If so, do people rewrite the tests again (and again) when they revamp the software's API/structure? (Unless they're smart enough to think up the best one at first, unlike me.)
I don't know of anyone who recommends TDD when you don't know what you're building yet. Unless you've created a very similar system before, then you prototype first, without TDD. There is a very real danger, however, of ending up putting the prototype into production without ever bringing the TDD process into play.
Some common ways of doin' it right are…
A. Throw the prototype away, and start over using TDD (can still borrow some code almost verbatim from the prototype, just re-implement following the actual TDD cycle).
B. Retrofit unit tests into the prototype, and then proceed with red, green, refactor from there.
but I stopped because I noticed that, almost always, the first API and structures I thought were great turned out to be clumsy after some progress
Test driven development should help you with the design. An API that is "clumsy" will seem clumsy as you write your tests for it.
Is it really common to write tests first, as TDD prescribes?
Depends on the developers. I use Test driven development for 99% of what I write. It aids in the design of the APIs and applications I write.
If so, do people rewrite the tests again (and again) when they revamp the software's API/structure?
Depends on the level of the tests. Hopefully, during a big refactor (that is, when you rewrite a chunk of code), you have some tests in place to cover the work you are about to do. Some unit tests will be thrown away, but integration and functional tests will be very important. They are what tell you that nothing has been broken.
You may have noticed I've made a point of writing test driven development and not "TDD". Test driven development is not simply "writing tests first"; it is allowing the tests to drive the development cycle. The design of your API will be strongly affected by the tests that you write (contrived example: that singleton or service locator will be replaced with IoC). Writing good APIs takes practice, and learning to listen to the tools you have at your disposal.
Purists say yes, but in practice it works out a little differently. Sometimes I write half a dozen tests and then write the code that passes them. Other times I will write several functions before writing the tests, because those functions are not meant to be used in isolation or because testing them would be hard.
And yes, you may find you need to rewrite tests as the API changes.
And to the purists, even they will admit that some tests are better than none.
Is it really reasonable to write tests at an early stage?
No, if you are writing top-down, high-level integration tests that require a real database or an internet connection to another website in order to work.
Yes, if you are implementing bottom-up with unit testing (i.e. testing a module in isolation).
The higher the "level", the more difficult the unit testing becomes, because you have to introduce more mocking/abstraction.
In my opinion, the architectural benefits of TDD only apply when combined with unit testing, because this drives the separation of concerns.
When I started TDD, I had to rewrite many tests when changing the API/architecture. With more experience, today there are only a few cases where this is necessary.
You should have a first layer of tests that verifies the externally visible behavior of your API regardless of its internals.
Updating this kind of tests when a new functional requirement emerges is not a problem. In the example you mention, it would be easy to adjust to new websites being scraped - you would just add new assertions to the tests to account for the new data fetched.
The fact that "the scraping code had to be revamped entirely" shouldn't affect the structure of these higher level tests, because from the outside, the API should be consumed exactly the same way as before.
If such a low-level technical detail does affect your high level tests, you're probably missing an abstraction that describes what data you get but hides the details of how it is retrieved.
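One possible shape for that missing abstraction, using invented names: the high-level code and its tests depend only on an interface describing the data, while the scraping lives behind it and can be revamped freely.

    // Callers and high-level tests see only "what data comes back",
    // not how it is retrieved (scraping, an API call, a cache, ...).
    public interface IQuoteSource
    {
        decimal GetLatestPrice(string symbol);
    }

    // High-level behaviour expressed purely in terms of the abstraction;
    // rewriting the scraper never touches this class or its tests.
    public class PortfolioValuer
    {
        private readonly IQuoteSource quotes;

        public PortfolioValuer(IQuoteSource quotes)
        {
            this.quotes = quotes;
        }

        public decimal ValueOf(string symbol, int shares)
        {
            return quotes.GetLatestPrice(symbol) * shares;
        }
    }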
Writing tests before you write the actual code would mean you know how your application will be designed. This is rarely the case.
As a matter of fact, I start by writing everything in a single file. It might have a few hundred lines or more. This way I can easily and quickly redesign the API. Later, when I decide I like it and that it's good, I start refactoring it by putting everything into meaningful namespaces and separate files.
When this is done, I start writing tests to verify that everything works fine and to find bugs.
TDD is just a myth. It is not possible to write the tests first and the code later, especially if you are at the beginning.
You always have to keep the KISS rule in mind. If you need crazy stuff like fakes or mocks to test your own code, you have already failed it.
A common story
Story: User logging in
As a user
I want to login with my details
So that I can get access to the site
Given such broad coverage, it is useless if I mock out system components such as the DB in order to perform the test, so can I say that people mainly use BDD for integration tests?
Here's my terminology.
Scenario: an example of the user using the system, with all relevant components in place rather than mocked out. May be automated and used as an acceptance test, but the conversations between business, testers and devs are the most important aspect of BDD. Often created using the Given / When / Then template, sometimes in tools which allow for natural language capture such as Cucumber or JBehave.
Integration test: Crosses the boundary of two components, and usually used to check the integrity of integration of those components. For instance, may be used to send messages back and forth between the client and server layers of a web interface, or to check database bindings with Hibernate, etc.. Does not necessarily involve the full stack. A scenario could be considered a particular kind of integration test. BDD doesn't really apply for most non-scenario integration tests, though you could still conceivably use the Given / When / Then template.
Unit test: An example of a consuming class using another class, usually with collaborators mocked out. May also be an example of how a consuming class delegates work to its collaborators. That's how we talk about it in BDD, anyway (you can do BDD at both levels). Can also use the Given / When / Then syntax.
Story: A slice through a feature to allow us to get faster feedback. The behavior of a feature may be illustrated with several scenarios, and these can also be used to help slice up the feature. Often illustrated with the As a... I want... So that... template, or the In order to... as a... I want... template of Feature Injection.
Feature: Features represent the way in which users will use the capabilities we’re giving them. This is the stage in which we start defining the concrete implementation and UI. A feature may be a web page, part of a web page, a module in a windows UI, part of an app, etc.
Capability: Something a user can achieve with the system, or which the system can achieve. E.g.: a user can book a trade; the system is secure enough to withstand hackers. Phrasing scenarios at this level helps them be independent of the UI and keeps them in the language of the business.
Hope this helps.
Your example is a user story, which describes an acceptance test. Acceptance tests can have end-to-end scope, but not necessarily. The core difference between acceptance and integration tests is what they focus on. An acceptance test is business-focused and can be written and read by a non-technical person (the customer). On the other hand, we have development-focused integration tests, which simply verify that two or more components work together.
Back to BDD. It can be used in acceptance testing (feature level) and in unit testing (code level). There are even different tools for the different levels of BDD:
SpecFlow (acceptance testing)
NSpec, NBehave (unit testing)
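For a feel of the acceptance level, here is a rough sketch of SpecFlow step bindings for the login story above; the step wording and the in-memory "login" are invented, and in a real suite the When step would drive the actual system.

    using TechTalk.SpecFlow;
    using NUnit.Framework;

    [Binding]
    public class LoginSteps
    {
        private string registeredUser;
        private bool accessGranted;

        [Given(@"a registered user ""(.*)""")]
        public void GivenARegisteredUser(string username)
        {
            registeredUser = username;                // a real suite would seed test data here
        }

        [When(@"the user logs in with valid details")]
        public void WhenTheUserLogsInWithValidDetails()
        {
            accessGranted = registeredUser != null;   // a real suite would drive the browser or API
        }

        [Then(@"the user gets access to the site")]
        public void ThenTheUserGetsAccessToTheSite()
        {
            Assert.That(accessGranted, Is.True);
        }
    }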
Behaviour Driven Development is thinking about the behaviour of a product in a given scenario. It extends both Test Driven Development and Domain Driven Design. BDD also thinks beyond integration tests: it is about maximizing the communication between users, developers, testers, managers and analysts.
Integration testing can be considered a step of BDD, but it can also exist outside the context of BDD, since integration testing can be used to cover the high-level behaviour of your application without dropping down to unit testing.
Behaviour is about the interactions between components of the system and so the use of mocking is fundamental to advanced TDD. Expertise in TDD begins to dawn at the point where the developer realizes that TDD is about defining behaviour rather than testing.
A user story may have a broad scope, as developing human-friendly software is always a priority. It combines the pragmatic approach of Extreme Programming with Enough Up Front Thinking based on Macro Level Analysis to enable Macro Level Planning.
Integration testing is mainly what we use BDD for - UI tests with Selenium. We are not actually mocking anything in these tests, as the BDD scenarios drive SpecFlow, which in turn drives Selenium WebDriver to perform user journeys such as logging in, clicking menu links and creating records. In fact, I'm trying my hardest to do everything through the UI where possible.
I have been working with the Business Analysts to write their user stories in a BDD fashion (in fact it is now in our contract with clients), and it has been very refreshing and useful to find that, while writing stories in a BDD fashion, we discover edge cases that might not otherwise have been thought of when we extrapolate the requirements into atomic steps (Given, When, Then). It truly is a win-win for both the business and the developers when we have a common language for expressing requirements.
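For reference, a user journey like the login above tends to look roughly like this in C# with Selenium WebDriver; the URL and element ids are invented for the sketch.

    using NUnit.Framework;
    using OpenQA.Selenium;
    using OpenQA.Selenium.Chrome;

    [TestFixture]
    public class LoginJourneyTests
    {
        [Test]
        public void User_Can_Log_In_Through_The_UI()
        {
            using (IWebDriver driver = new ChromeDriver())
            {
                driver.Navigate().GoToUrl("http://localhost:5000/login");   // hypothetical URL

                // Fill in the form and submit it, just as a user would.
                driver.FindElement(By.Id("username")).SendKeys("alice");
                driver.FindElement(By.Id("password")).SendKeys("secret");
                driver.FindElement(By.Id("login-button")).Click();

                // Verify the user landed on a page that greets them.
                Assert.That(driver.FindElement(By.Id("welcome-banner")).Text,
                            Does.Contain("alice"));
            }
        }
    }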
Let's say I'm starting a new project, and quality is a top priority.
I plan on doing extensive unit testing. What is important to keep in mind when I'm working on the architecture, in order to ease and empower further unit testing?
Edit: I read an article some time ago (I can't find it now) about how decoupling instantiation code from class behavior can be helpful for unit testing. That's the kind of design tip I'm seeking here.
Ease of testing comes from being able to replace as many of your method's dependencies as possible with test code (mocks, fakes, etc.). The currently recommended way to accomplish this is through dependency inversion, aka the Hollywood Principle: "Don't call us, we'll call you." In other words, your code should "ask for things, don't look for things."
Once you start thinking this way you'll find code can easily have dependencies on many things. Not only do you have dependencies on other objects, but databases, files, environment variables, OS APIs, globals, singletons, etc. By adhering to a good architecture, you minimize most of these dependencies by providing them via the appropriate layers. So when it comes time to test, you don't need a working database full of test data, you can simply replace the data object with a mock data object.
This also means you have to carefully sort out your object construction from your object execution. The "new" statement placed in a constructor generates a dependency that is very hard to replace with a test mock. It's better to pass those dependencies in via constructor arguments.
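A small sketch of that idea, with invented names: the class asks for its data source through the constructor, so a test can hand it a fake instead of a real database.

    // The dependency is expressed as an interface...
    public interface ICustomerData
    {
        int CountActiveCustomers();
    }

    public class CustomerReport
    {
        private readonly ICustomerData data;

        // ...and passed in via a constructor argument, rather than created with "new" inside it.
        public CustomerReport(ICustomerData data)
        {
            this.data = data;
        }

        public string Summary()
        {
            return "Active customers: " + data.CountActiveCustomers();
        }
    }

    // In a unit test, a trivial fake replaces the database-backed implementation.
    public class FakeCustomerData : ICustomerData
    {
        public int CountActiveCustomers() { return 42; }
    }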
Also, keep the Law of Demeter in mind. Don't dig more than one layer deep into an object, or else you create hidden dependencies. Calling Flintstones.Wilma.addChild(pebbles); means what you thought was a dependence on "Flintstones" really is a dependence on both "Flintstones" and "Wilma".
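In code, the "ask for what you need" fix might look like this (classes invented for the sketch): the method takes the object it actually uses, rather than digging through an intermediate one.

    public class Child { }

    public class Parent
    {
        private readonly System.Collections.Generic.List<Child> children =
            new System.Collections.Generic.List<Child>();

        public void AddChild(Child child) { children.Add(child); }
    }

    public class FamilyService
    {
        // Instead of flintstones.Wilma.AddChild(pebbles), which hides a dependency on Wilma,
        // this method depends only on the Parent it is actually working with.
        public void RegisterChild(Parent parent, Child child)
        {
            parent.AddChild(child);
        }
    }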
Make sure that your code is testable by making it highly cohesive and loosely coupled. And make sure you know how to use mocking tools to mock out the dependencies during unit tests.
I recommend that you get familiar with the SOLID principles, so that you can write more testable code.
You might also want to check out these two SO questions:
Unit Test Adoption
What Should Be A Unit
Some random thoughts:
Define your interfaces: decouple the functional modules from each other, and decide how they will communicate with each other. The interface is the “contract” between the developers of different modules. Then, if your tests operate on the interfaces, you're ensuring that the teams can treat each other's modules as black boxes, and therefore work independently.
Build and test at least the basic functionality of the UI first. Once your project can “talk” to you, it can tell you what's working and what's not ... but only if it's not lying to you. (Bonus: if your developers have no choice but to use the UI, you'll quickly identify any shortcomings in ease-of-use, work flow, etc.)
Test at the lowest practical level: the more confident you are that the little pieces work, the easier it will be to combine them into a working whole.
Write at least one test for each feature, based on the specifications, before you start coding. After all, the features are the reason your customers will buy your product. Be sure it's designed to do what it's supposed to do!
Don't be satisfied when it does what it's supposed to do; ensure it doesn't do what it's not supposed to do! Feed it bad data, use it in an illogical way, disconnect the network cable during data transfer, run it alongside conflicting applications. Your customers will.
Good luck!
Your tests will only ever be as good as your requirements. They can be requirements that you come up with up front all at once, they can be requirements that you come up with one at a time as you add features, or they can be requirements that you come up with after you ship it and people start reporting a boat load of bugs, but you can't write a good test if no one can or will document exactly what the thing is supposed to do.
People at my company see unit testing as a lot of extra work that offers fewer benefits than our existing functional tests. Are unit and integration tests worth it? Note that we have a large existing codebase that wasn't designed with testing in mind.
Most people are unaware of what automated unit tests are for:
To experiment with a new technology
To document how to use a part of the code
To make sure a dead bug stays dead
To allow you to refactor the code
To allow you to change any major parts of the code
To create a lower watermark below which the quality of your product cannot possibly drop
To increase development speed, because now you know that something works (instead of hoping it does until a customer reports a bug)
So if any of these reasons bring you a benefit, automated unit tests are for you. If not, then don't waste your time.
(I'm assuming that you're using "functional test" to mean a test involving the whole system or application being up and running.)
I would unit test new functionality as I wrote it, for three reasons:
It helps me get to working code quicker. The turnaround time for "unit test failed, fix code, unit test passed" is generally a lot shorter than "functional test failed, fix code, functional test passed".
It helps me to design my code in a cleaner way
It helps me understand my code and what it's meant to be doing when I come to maintain it. If I make a change, it will give me more confidence that I haven't broken anything.
(This includes bug fixes, as suggested by Epaga.)
I would strongly recommend Michael Feathers' "Working Effectively with Legacy Code" to give you tips on how to start unit testing a codebase which wasn't designed for it.
It depends on whether your functional tests are automated or done manually. If it's the latter, then any kind of automated test suite is useful since the cost of running those unit / integration tests is far lower than running manual functional tests. You can show real ROI there. I would recommend starting with writing some integration tests and if time / budget allows in the future, take a look at unit testing then.
Retroactively writing unit tests for legacy code can very often NOT be worth it. Stick with functional tests, and automate them.
Then what we've done is have the guideline that any bug fixes (or new features) must be accompanied by unit tests at least testing the fix. That way you get the project at least going in the right direction.
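In practice such a fix-accompanying test is usually tiny; here is a hedged sketch (the Invoice class and the bug are made up) of a regression test that pins down the corrected behaviour so the bug stays dead.

    using NUnit.Framework;

    [TestFixture]
    public class InvoiceRegressionTests
    {
        // Added alongside the bug fix: an empty invoice used to blow up, now it totals to zero.
        [Test]
        public void EmptyInvoice_TotalsToZero()
        {
            var invoice = new Invoice();

            Assert.That(invoice.Total(), Is.EqualTo(0m));
        }
    }

    // Minimal stand-in so the sketch is self-contained.
    public class Invoice
    {
        private readonly System.Collections.Generic.List<decimal> lines =
            new System.Collections.Generic.List<decimal>();

        public void AddLine(decimal amount) { lines.Add(amount); }

        public decimal Total()
        {
            decimal sum = 0m;
            foreach (decimal line in lines) { sum += line; }
            return sum;
        }
    }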
And I have to agree with Jon Skeet (how could I not?) in recommending "Working Effectively With Legacy Code", it really was a helpful skim/read.
As it happens, I read a paper last night on this very subject. The authors compare projects within four groups at Microsoft and IBM, contrasting, in hindsight, projects which used both unit testing and functional testing and projects which used functional testing alone. To quote the authors:
"The results of the case studies
indicate that the preview release
defect density of the four products
decreased between 40% and 90% relative
to similar projects that did not use
the TDD practice. Subjectively, the
teams experienced a 15 to 35% increase
in initial development time after
adopting TDD."
This indicates that it is certainly worth doing unit testing when you add new functionality to your project.
Yes, they are worth it. I am now faster at coding since I started unit testing my code. I spend less time fixing bugs and more time thinking about what my code should do.
One application I was brought in to consult on the FAT (factory acceptance testing) of consisted of a 21,000-line switch statement. Most units of functionality were a few dozen to a couple of hundred lines in a case statement. The application was built in several variants, so there were many #ifdef sections of the switch.
It was not designed for unit testing - it was not factored at all.
(It was designed in the sense that there was a definite, easy-to-comprehend architecture - malloc a struct, send the main loop a user message with the pointer to the struct as the lParam, and then free it when the message is processed. But form did not follow function, which is the central tenet of good design.)
To add unit testing to new functionality would mean a major break with the pattern; either you would need to put your code somewhere other than the big switch, and double the complexity of the variant selection mechanism, or make a large amount of scaffolding to put the messages in the queue to trigger the new functionality.
So though it's certainly desirable to unit test new functionality, it's not always practical if a system isn't already well factored. Either there's a significant amount of work to refactor the system to allow unit testing, or you end up bench-testing the code and cutting and pasting it into the existing framework - and a copy of unit-tested code isn't unit-tested code.
You test when you want to know something about something. If you know that your product (system, unit, service, component...) is going to work, then there's no need to test it. If you're uncertain as to whether it will work, you probably have some questions about it. Whether those questions are worth answering is a matter of risk and priorities.
If you're sure that your product will work, and you don't have any questions about it, there is still one question that's worth asking: why don't I have any questions?
---Michael B.
Unit testing is indeed extra work, but it pays off in the long run. Here are its advantages over integration testing:
you get a regression suite that acts as a safety net in case of refactoring - the same can be said of integration tests, although it can be tough to tell whether a given test covers a particular piece of code.
unit tests give immediate feedback when modifying the code - and this feedback can be very accurate, pointing to the exact method where the anomaly is.
those tests are cheap to run: they run very fast (a few seconds, typically), without any installation or deployment - just compile and test - so they can be run often.
it is easy to add a new test to reproduce a problem once it is identified (which augments the regression suite), or to answer a question ("what happens if this function is called with a null parameter...?").
There clearly is some overlap between the two, but they are complementary as they both offer advantages.
Now, like any software engineering process, testing has to be tailored according to the project's needs.
With a large legacy codebase - legacy in the sense of not unit tested - I would recommend restricting unit tests to new features added to the code, as unit tests can be hard to introduce retroactively. In this regard, I can only second (third?) the recommendation of the "Working Effectively with Legacy Code" book to help bring unit testing into an existing codebase.