I have often seen tests where canned inputs are fed into a program and the generated outputs are checked against canned (expected) outputs, usually via diff. If the diff is accepted, the code is deemed to pass the test.
Questions:
1) Is this an acceptable unit test?
2) Usually the unit test inputs are read in from the file system and are big
xml files (maybe they represent a very large system). Are unit tests supposed
to touch the file system? Or would a unit test create a small input on the fly
and feed that to the code to be tested?
3) How can one refactor existing code to be unit testable?
Output differences
If your requirement is to produce output with a certain degree of accuracy, then such tests are absolutely fine. It's you who makes the final decision - "Is this output good enough, or not?".
Talking to file system
You don't want your tests to talk to the file system in the sense of relying on some files existing somewhere in order for your tests to work (for example, reading values from configuration files). It's a bit different with test input resources - you can usually embed them in your tests (or at least in the test project), treat them as part of the codebase, and they usually should be loaded before the test executes. For example, when testing rather large XMLs it's reasonable to have them stored as separate files, rather than as strings in code files (which sometimes can be done instead).
Point is - you want to keep your tests isolated and repeatable. If you can achieve that with a file being loaded at runtime, it's probably fine. However, it's still better to have such files as part of the codebase/resources than as standard system files lying somewhere.
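For example, a minimal sketch of a test that loads its XML input relative to the test assembly's output folder, so it works for anyone who checks the project out (the file name, parser class and NUnit usage are illustrative assumptions, not from the question):

using System;
using System.IO;
using NUnit.Framework;

[TestFixture]
public class OrderParserTests
{
    [Test]
    public void ParsesLargeOrderFile()
    {
        // Resolve the input relative to the test assembly rather than an
        // absolute, machine-specific path, so the test stays repeatable.
        string baseDir = AppDomain.CurrentDomain.BaseDirectory;
        string inputPath = Path.Combine(baseDir, "TestData", "large-order.xml");

        var order = OrderParser.Parse(inputPath);   // hypothetical class under test

        Assert.That(order.Items, Is.Not.Empty);
    }
}

The XML file itself would be checked in under the test project (for example, copied to the output directory on build), so it is versioned together with the tests.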
Refactoring
This question is fairly broad, but to put you in the right direction - you want to introduce a more solid design, decouple objects and separate responsibilities. Better design will make testing easier and, most importantly, possible. Like I said, it's a broad and complex topic, with entire books dedicated to it.
1) is this an acceptable unit test?
This is not a unit test by definition. A unit test focuses on the smallest possible amount of code. Your test can still be a useful test, regression test, self-documenting test, TDD test, etc. It is just not a unit test, although it may be equally useful.
2) Are unit tests supposed to touch the file system?
Typically not, unless you need to unit test something explicitly related to the filesystem. One reason is that if you have several hundred unit tests, it is nice to have them run in a couple of seconds rather than minutes.
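One common way to keep unit tests off the file system is to have the code under test accept a TextReader (or stream) instead of a file path; the test then builds a small input on the fly in memory. A sketch, with a hypothetical ReportParser class:

using System.IO;
using System.Linq;
using System.Xml.Linq;
using NUnit.Framework;

public class ReportParser
{
    // Accepts any TextReader, so production code can pass a file
    // and a unit test can pass a StringReader built on the fly.
    public static int CountEntries(TextReader reader)
    {
        return XDocument.Load(reader).Descendants("entry").Count();
    }
}

[TestFixture]
public class ReportParserTests
{
    [Test]
    public void CountsEntriesFromInMemoryXml()
    {
        const string xml = "<report><entry id=\"1\"/><entry id=\"2\"/></report>";
        using (var reader = new StringReader(xml))
        {
            Assert.That(ReportParser.CountEntries(reader), Is.EqualTo(2));
        }
    }
}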
3) How can one refactor existing code to be unit testable?
A better question is why do you want the code to be unit testable? If you are trying to learn TDD it is better to start with a new project. If you have bugs then try to write tests for the bugs. If the design is slowing you down then you can refactor towards testability over time.
Addressing only the 3rd question. It is extremely difficult. You really need to write tests at the same time you write the code, or before. It is a nightmare to try to slap tests onto an existing code base, and it is often more productive to throw away the code and start over.
This is an acceptable unit test.
The files being read should be part of the test project so anyone that checks out the project from repository will have the same files at the same relative location.
Having black box tests is a great start, you can refactor the existing code and use the current tests to verify that it is still working (depending on the quality of the tests). Here is a short blog about refactoring for testability: http://www.beletsky.net/2011/02/refactoring-to-testability.html
A diff test can be acceptable as a unit test, especially when you're using test data that is shared between unit tests.
If you don't know how many items there are in the SUT you could use the following:
int itemsBeforeTest = SUT.Items.Count;
SUT.AddItem();
Assert.AreEqual(itemsBeforeTest + 1, SUT.Items.Count);
If a Unit Tests requires so much data that it needs to be read from a big XML file, it's not a real Unit Test. A Unit Test should test a class in complete isolation and mock out all dependencies.
Using a pattern like the Builder pattern can also help in creating test data for your unit tests. The biggest problem with having your test data in a separate file is that it's hard to understand what exactly the test does. If you create your test data in the arrange part of your unit test, it's immediately clear what is important for the test.
For example, let's say you have the following arrange code to test if the price of an invoice is correct:
Address billingAddress = new Address("Stationsweg 9F", "Groningen", "Nederland", "9726AE");
Address shippingAddress = new Address("Aweg 1", "Groningen", "Nederland", "9726AB");

Customer customer = new Customer(99, "Piet", "Klaassens", 30, billingAddress, shippingAddress);

Product product = new Product(88, "Tafel", 19.99);
Invoice invoice = new Invoice(customer);
This can be changed to the following when using a Builder:
Invoice invoice = Builder<Invoice>.CreateNew()
.With(i => i.Product = Builder<Product>.CreateNew()
.With(p => p.Price = 19.99)
.Build())
.Build();
When using a Builder it's much easier to see what is important, and your code is also more maintainable.
Refactoring code to become more testable is a broad topic. It comes down to thinking about 'how would I test this code?' while you are writing the code.
Take the following example:
public class AlarmClock
{
    public AlarmClock()
    {
        // The constructor creates its own dependencies, so a test cannot replace them.
        SatelliteSyncService = new SatelliteSyncService();
        HardwareClient = new HardwareClient();
    }
}
This is hard to test. You need to make sure that both the SatelliteSyncService and the HardwareClient are functional when testing the AlarmClock.
This change to the constructor makes it much easier to test:
public AlarmClock(IHardwareClient hardwareClient, ISatelliteSyncService satelliteSyncService)
{
    SatelliteSyncService = satelliteSyncService;
    HardwareClient = hardwareClient;
}
Techniques like Dependency Injection help with refactoring your code to be more testable. Also watch out for static values like DateTime.Now or the use of a Singleton because they are hard to test.
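For instance, a sketch (names invented here for illustration) of hiding DateTime.Now behind a small interface so that tests can supply a fixed time:

using System;

public interface IClock
{
    DateTime Now { get; }
}

public class SystemClock : IClock
{
    public DateTime Now { get { return DateTime.Now; } }
}

public class ReminderService
{
    private readonly IClock clock;

    public ReminderService(IClock clock)
    {
        this.clock = clock;
    }

    public bool IsDue(DateTime reminderTime)
    {
        // Uses the injected clock instead of DateTime.Now directly,
        // so a test can pass a clock frozen at a known time.
        return clock.Now >= reminderTime;
    }
}

In a test you would pass a stub IClock that always returns the same value; in production you register SystemClock in your IoC container.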
A really good introduction to writing testable code can be found here.
You should not require the code to be refactored to be able to perform unit tests. Unit tests, as the name implies, are testing a unit of code for the system. The best unit tests are small, quick to execute and exercise only a very small subset of the piece of code being tested (e.g. class).
The reason for having small, compact unit tests that only exercise one part of the code is that the objective of unit tests is to find bugs in that unit of code. If the unit test takes a long time to execute and tests lots of things it makes the identification of a bug in the code that much harder.
As to accessing the file system, I see no problem. Some unit tests may require a database to be constructed before the test is carried out, or may produce output for which writing the checks in code would be difficult or too time-consuming.
The files for unit testing should be treated like the rest of the code - put under version control. If you are paranoid you could implement a check within the unit test, such as computing an MD5 hash of the file and comparing it against a hard-coded value, so future reruns of the test can verify that the test data has not inadvertently changed.
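A minimal version of that checksum guard might look like this (the file name and the expected hash are placeholders):

using System;
using System.IO;
using System.Security.Cryptography;
using NUnit.Framework;

[TestFixture]
public class TestDataIntegrityTests
{
    [Test]
    public void TestDataHasNotChanged()
    {
        string path = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "TestData", "input.xml");

        using (var md5 = MD5.Create())
        using (var stream = File.OpenRead(path))
        {
            string actualHash = BitConverter.ToString(md5.ComputeHash(stream));

            // Hard-coded hash of the checked-in test file; update it deliberately
            // whenever the test data is meant to change.
            Assert.That(actualHash, Is.EqualTo("00-11-22-33-44-55-66-77-88-99-AA-BB-CC-DD-EE-FF"));
        }
    }
}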
Just my humble thoughts.
I am not that good with the "unittest" topic. I'd like to create a unit test in order to say "Hey buddy, this is the wrong (or right) answer because blabla!". I need to add a unit test because it took three painful weeks to find out why the prediction in the machine learning model did not work! Hence I want to avoid that type of error in the future.
Questions :
How can I ask the code to alert me when len(X) - len(pred_values) is not equal to num_step?
Do I need to create a unit test file to gather all the unit tests, e.g. unittest.py?
Do we need to place the unit tests away from the main code?
1.
The test code can alert you by means of an assertion. In your test, you can use self.assertEqual()
self.assertEqual(len(X) - len(pred_values), num_step)
2.
Yes you would normally gather your TestCase classes in a module prefixed with test_. So if the code under test lives in a module called foo.py, you would place your tests in test_foo.py.
Within test_foo.py you can create multiple TestCase classes that group related tests together.
3.
It is a good idea to separate the tests from the main code, although not mandatory. Reasons why you might want to separate the tests include (as quoted from the docs):
The test module can be run standalone from the command line.
The test code can more easily be separated from shipped code.
There is less temptation to change test code to fit the code it tests without a good reason.
Test code should be modified much less frequently than the code it tests.
Tested code can be refactored more easily.
Tests for modules written in C must be in separate modules anyway, so why not be consistent?
If the testing strategy changes, there is no need to change the source code.
Lots more info in the official docs.
Scenario: I need to write a complex nHibernate query that would return a projected DTO, but I want to use a TDD approach. The method would look like this:
public PrintDTO GetUsersForPrinting(int userId)
{
Session.QueryOver<User>().//some joins, conditions etc.
//returns projected dto
}
Questions:
Since the most common approach is to use an in-memory database for this kind of operation, should I write an integration test?
If I am using an in-memory db, can I write unit tests?
Is one test enough?
Since my integration test will probably check the projection, how should I name it? "GetUserForPrinting_return_correct_DTO" seems too abstract and silly.
I ask because:
There is a lot of abstract information about TDD and integration testing, but when it comes to a concrete implementation it is very difficult to apply that information.
TDD suggests that an integration test should be made up of unit tests.
This is not really a very good problem to learn TDD with. I assume you don't already know what the complex query looks like, and you want to use test-driven techniques to drive it out. Awesome :)
But let's see if I can answer your questions.
1. Yes.
2. Any test that includes a real db, whether it is in-memory or on-disk, is not a unit test; a unit test would use a mock db.
3. Maybe - if your query is complex enough, then no.
4. testGetUsersForPrinting or getUsersForPrintingTest or similar.
Most probably I would drive out the query in a SQL interpreter, not in code. The aim would be to produce a series of integration tests against an in-memory db based on what I learn during this process.
Start from the minimum possible DTO you can think of, and build up from there.
Finally convert the query into nhibernate calls, then make the integration tests pass.
Test-driven, but not really unit-test-driven.
If you are willing to accept maximum TDD discipline and deal with working slower and being more annoyed than usual, you can automate each integration test as you develop it and write code to make it pass. This will mean you are switching frequently among 3 levels of abstraction / editors / environments (direct SQL queries, integration tests, c# code) - I deal with this by setting up techniques to force myself to follow the right steps each time.
This last bit is why this is not a good problem to learn TDD with. You will need a lot of discipline you probably haven't forced yourself to acquire yet!
Good luck.
OK, some concrete examples. I would modify your code sample to look like this:
public PrintDTO GetUsersForPrinting(int userId, ISession session)
{
var data = session.QueryOver<User>(); // some joins, conditions etc.
return data; // or whatever
}
In your unit test you would write
public void testDTO()
{
    // Arrange
    StubSession session = ...; // set up a stub session, which returns hardcoded values

    // Act
    PrintDTO users = GetUsersForPrinting(111, session);

    // Assert
    Assert.That(users.size(), Is.EqualTo(1));
    Assert.That(users.get(0).userId, Is.EqualTo(111));
}
In your integration test, you would use a real db, and your session object would actually connect to it, and the queries would be resolved against that db
Arrange-Act-Assert is a standard method for organizing unit tests.
Generally you want as few Asserts as possible in a unit test. And you will have multiple unit tests.
When you are writing a unit test, start by writing the Assert, then fill in the rest to make it compile/get the result you want. Make the test fail first, because then you know you have really delivered something when it passes.
In this example to implement a stub ISession you would derive a local StubSession class (only visible to the test suite) from ISession and just fill in the absolute minimum to get it to compile, and return the minimum data to get the test to pass.
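NHibernate's real ISession has a large surface, so stubbing it directly gets verbose; here is a sketch of the same idea against a narrower, hypothetical interface that the method could depend on instead (the interface, stub and User members are all invented for illustration):

using System.Collections.Generic;

// Hypothetical narrow interface wrapping just the query the method needs.
public interface IUserQuerySession
{
    IList<User> QueryUsersForPrinting(int userId);
}

// Stub visible only to the test project; returns hard-coded values.
public class StubSession : IUserQuerySession
{
    public IList<User> QueryUsersForPrinting(int userId)
    {
        return new List<User> { new User { Id = userId, Name = "Test user" } };
    }
}

The unit test passes a StubSession and asserts against the hard-coded data; the integration test passes an implementation that delegates to a real NHibernate ISession.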
To build up to your whole DTO - assuming you know what you want in your DTO - proceed, as you say in the comments, incrementally. Build up each part of your DTO a piece at a time; add a unit test for each piece.
Keeping track of this is another piece of TDD discipline.
Set yourself up with a TODO list - just a simple text file, or possibly a lengthy comments at the start of your test suite. List all the things you want to test e.g. zero results, one result, two results, 20 results. User id, whatever other pieces of information you need to have.
If you are doing a complex query across tables or whatever, add a todo item for each join, each part of the where clause, etc.
Add items for ordering and paging etc if you are using those.
Pick the simplest things first. Only do one small thing (in a single red-green-refactor cycle) at a time. As you work through your list, you might want to break items up into smaller pieces, or you might think of additional things you need to do. Add them to the TODO list rather than working directly on them.
In this particular case I would swap - after each red-green-refactor cycle - into the SQL environment and/or the sqlite integration test to work out how to make the next piece work. I guess this is a sort of step between red and green - choose what you will test next, write the test (which fails obviously), fiddle around in SQL until you know how to make it pass, write the nHibernate calls to make your test green, then refactor.
Be aware some of the things you list might turn out not to be necessary, or take too long, etc. It's good to write them down still, so you know what you are not doing as well as what you are doing. Keep focused on your goal.
I tend to also develop a list of "smells" and/or refactorings that I can see I will want to do but am not quite ready for this cycle. Remember to minimise duplication/refactor your tests as well as your SUT (System Under Test).
It's a doing rather than seeing thing. The list of what unit tests you end up with, and the code they exercise, is not a very good description of the journey. Kent Beck's original TDD book is slim and will give you some good overall pointers, but not really about constructing queries.
Does any of that help?
Since the most common approach is to use an in-memory database for this kind of operation, should I write an integration test?
Using an in-memory database is still an integration test (because it actually tests whether your query generates correct SQL and executes it against a database).
If I am using an in-memory db, can I write unit tests?
No, it would be an integration test.
Is one test enough?
Probably not; you should check each condition of your query - for example, one test per where clause, one for paging and one for sorting, if applicable.
Since my integration test probably will check projection, how should I name it? "GetUserForPrinting_return_correct_DTO" seems too abstract and silly.
GivenUserForPrinting_WhenGetUserForPrinting_ThenMapToDTO would be a better name.
I used TDD as a development style on some projects in the past two years, but I always get stuck on the same point: how can I test the integration of the various parts of my program?
What I am currently doing is writing a testcase per class (this is my rule of thumb: a "unit" is a class, and each class has one or more testcases). I try to resolve dependencies by using mocks and stubs, and this works really well as each class can be tested independently. After some coding, all important classes are tested. I then "wire" them together using an IoC container. And here I am stuck: how do I test whether the wiring was successful and the objects interact the way I want?
An example: Think of a web application. There is a controller class which takes an array of ids, uses a repository to fetch the records based on these ids and then iterates over the records and writes them as a string to an outfile.
To make it simple, there would be three classes: Controller, Repository, OutfileWriter. Each of them is tested in isolation.
What I would do in order to test the "real" application: make the http request (either manually or automated) with some ids from the database and then look in the filesystem to see if the file was written. Of course this process could be automated, but still: doesn't that duplicate the test logic? Is this what is called an "integration test"? In a book I recently read about unit testing, it seemed to me that integration testing was more of an anti-pattern?
IMO, and I have no literature to back me on this, the key difference between our various forms of testing is scope:
Unit testing is testing isolated pieces of functionality [typically a method or stateful class]
Integration testing is testing the interaction of two or more dependent pieces [typically a service and consumer, or even a database connection, or connection to some other remote service]
System integration testing is testing of a system end to end [a special case of integration testing]
If you are familiar with unit testing, then it should come as no surprise that there is no such thing as a perfect or 'magic-bullet' test. Integration and system integration testing is very much like unit testing, in that each is a suite of tests set to verify a certain kind of behavior.
For each test, you set the scope which then dictates the input and expected output. You then execute the test, and evaluate the actual to the expected.
In practice, you may have a good idea how the system works, and so writing typical positive and negative path tests will come naturally. However, for any application of sufficient complexity, it is unreasonable to expect total coverage of every possible scenario.
Unfortunately, this means unexpected scenarios will crop up in Quality Assurance [QA], PreProduction [PP], and Production [Prod] cycles. At which point, your attempts to replicate these scenarios in dev should make their way into your integration and system integration suites as automated tests.
Hope this helps, :)
ps: pet-peeve #1: managers or devs calling integration and system integration tests "unit tests" simply because nUnit or MsTest was used to automate it ...
What you describe is indeed integration testing (more or less). And no, it is not an antipattern, but a necessary part of the software development lifecycle.
Any reasonably complicated program is more than the sum of its parts. So however well you unit test it, you still have not much clue about whether the whole system is going to work as expected.
There are several aspects of why it is so:
unit tests are performed in an isolated environment, so they can't say anything about how the parts of the program are working together in real life
the "unit tester hat" easily limits one's view, so there are whole classes of factors which the developers simply don't recognize as something that needs to be tested*
even if they do, there are things which can't be reasonably tested in unit tests - e.g. how do you test whether your app server survives under high load, or if the DB connection goes down in the middle of a request?
* One example I just read from Luke Hohmann's book Beyond Software Architecture: in an app which applied strong antipiracy defense by creating and maintaining a "snapshot" of the IDs of HW components in the actual machine, the developers had the code very well covered with unit tests. Then QA managed to crash the app in 10 minutes by trying it out on a machine without a network card. As it turned out, since the developers were working on Macs, they took it for granted that the machine has a network card whose MAC address can be incorporated into the snapshot...
What I would do in order to test the "real" application: making the http request (either manually or automated) with some ids from the database and then look in the filesystem if the file was written. Of course this process could be automated, but still: doesn't that duplicate the test-logic?
Maybe you are duplicating code, but you are not duplicating effort. Unit tests and integration tests serve two different purposes, and usually both purposes are desired in the SDLC. If possible, factor out code used for both unit and integration tests into a common library. I would also try to have separate projects for your unit and integration tests, because your unit tests should be run separately (fast and with no dependencies). Your integration tests will be more brittle and break more often, so you will probably have a different policy for running and maintaining those tests.
Is this what is called an "integration test"?
Yes indeed it is.
In an integration test, just as in a unit test, you need to validate what happened in the test. In your example you specified an OutfileWriter, so you would need some mechanism to verify that the file and its data are good. You really want to automate this, so you might want to have something like:
class OutFileValidator {
    function isCorrect(fName, dataList) {
        // open the file, read the data and
        // run the validation logic
    }
}
You might review "Taming the Beast", a presentation by Markus Clermont and John Thomas about automated testing of AJAX applications.
YouTube Video
Very rough summary of a relevant piece: you want to use the smallest testing technique you can for any specific verification. Spelling the same idea another way, you are trying to minimize the time required to run all of the tests, without sacrificing any information.
The larger tests, therefore are mostly about making sure that the plumbing is right - is Tab A actually in slot A, rather than slot B; do both components agree that length is measured in meters, rather than feet, and so on.
There's going to be duplication in which code paths are executed, and possibly you will reuse some of the setup and verification code, but I wouldn't normally expect your integration tests to include the same level of combinatoric explosion that would happen at a unit level.
Driving your TDD with BDD would cover most of this for you. You can use Cucumber / SpecFlow, with WatiR / WatiN. For each feature it has one or more scenarios, and you work on one scenario (behaviour) at a time, and when it passes, you move onto the next scenario until the feature is complete.
To complete a scenario, you have to use TDD to drive the code necessary to make each step in the current scenario pass. The scenarios are agnostic to your back end implementation, however they verify that your implementation works; if there is something that isn't working in the web app for that feature, the behaviour needs to be in a scenario.
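As a rough illustration of what a SpecFlow step binding can look like (the feature wording, domain types and helpers are invented here; only the [Binding]/[Given]/[When]/[Then] attributes come from SpecFlow):

using NUnit.Framework;
using TechTalk.SpecFlow;

[Binding]
public class ExportSteps
{
    private Order order;            // hypothetical domain type
    private string[] exportLines;

    [Given(@"an order with (\d+) line items")]
    public void GivenAnOrderWithLineItems(int count)
    {
        order = OrderBuilder.WithLineItems(count);   // hypothetical test data builder
    }

    [When(@"I export the order")]
    public void WhenIExportTheOrder()
    {
        exportLines = new OrderExporter().Export(order);   // hypothetical exporter
    }

    [Then(@"the export file contains (\d+) lines")]
    public void ThenTheExportFileContainsLines(int count)
    {
        Assert.That(exportLines.Length, Is.EqualTo(count));
    }
}

Each scenario step maps to one of these methods, and the code behind them is driven out with TDD as described above.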
You can of course use integration testing, as others pointed out.
What is the difference between unit tests and functional tests? Can a unit test also test a function?
Unit tests tell a developer that the code is doing things right; functional tests tell a developer that the code is doing the right things.
You can read more at Unit Testing versus Functional Testing
A well-explained real-life analogy of unit testing and functional testing can be described as follows:
Many times the development of a system is likened to the building of a house. While this analogy isn't quite correct, we can extend it for the purposes of understanding the difference between unit and functional tests.
Unit testing is analogous to a building inspector visiting a house's construction site. He is focused on the various internal systems of the house, the foundation, framing, electrical, plumbing, and so on. He ensures (tests) that the parts of the house will work correctly and safely, that is, meet the building code.
Functional tests in this scenario are analogous to the homeowner visiting this same construction site. He assumes that the internal systems will behave appropriately, that the building inspector is performing his task. The homeowner is focused on what it will be like to live in this house. He is concerned with how the house looks, are the various rooms a comfortable size, does the house fit the family's needs, are the windows in a good spot to catch the morning sun.
The homeowner is performing functional tests on the house. He has the user's perspective.
The building inspector is performing unit tests on the house. He has the builder's perspective.
As a summary,
Unit Tests are written from a programmer's perspective. They are made to ensure that a particular method (or a unit) of a class performs a set of specific tasks.
Functional Tests are written from the user's perspective. They ensure that the system is functioning as users are expecting it to.
Unit Test - testing an individual unit, such as a method (function) in a class, with all dependencies mocked up.
Functional Test - AKA Integration Test, testing a slice of functionality in a system. This will test many methods and may interact with dependencies like Databases or Web Services.
A unit test tests an independent unit of behavior. What is a unit of behavior? It's the smallest piece of the system that can be independently unit tested. (This definition is actually circular, IOW it's really not a definition at all, but it seems to work quite well in practice, because you can sort-of understand it intuitively.)
A functional test tests an independent piece of functionality.
A unit of behavior is very small: while I absolutely dislike this stupid "one unit test per method" mantra, from a size perspective it is about right. A unit of behavior is something between a part of a method and maybe a couple of methods. At most an object, but not more than one.
A piece of functionality usually comprises many methods and cuts across several objects and often through multiple architectural layers.
A unit test would be something like: when I call the validate_country_code() function and pass it the country code 'ZZ' it should return false.
A functional test would be: when I fill out the shipping form with a country code of ZZ, I should be redirected to a help page which allows me to pick my country code out of a menu.
Unit tests are written by developers, for developers, from the developer's perspective.
Functional tests may be user facing, in which case they are written by developers together with users (or maybe with the right tools and right users even by the users themselves), for users, from the user's perspective. Or they may be developer facing (e.g. when they describe some internal piece of functionality that the user doesn't care about), in which case they are written by developers, for developers, but still from the user's perspective.
In the former case, the functional tests may also serve as acceptance tests and as an executable encoding of functional requirements or a functional specification, in the latter case, they may also serve as integration tests.
Unit tests change frequently, functional tests should never change within a major release.
TLDR:
To answer the question: Unit Testing is a subtype of Functional Testing.
There are two big groups: Functional and Non-Functional Testing. The best (non-exhaustive) illustration that I found is this one (source: www.inflectra.com):
(1) Unit Testing: testing of small snippets of code (functions/methods). It may be considered as (white-box) functional testing.
When functions are put together, you create a module = a standalone piece, possibly with a User Interface that can be tested (Module Testing). Once you have at least two separate modules, then you glue them together and then comes:
(2) Integration Testing: when you put two or more pieces of (sub)modules or (sub)systems together and see if they play nicely together.
Then you integrate the 3rd module, then the 4th and 5th in whatever order you or your team see fit, and once all the jigsaw pieces are placed together, comes
(3) System Testing: testing SW as a whole. This is pretty much "Integration testing of all pieces together".
If that's OK, then comes
(4) Acceptance Testing: did we build what the customer asked for actually? Of course, Acceptance Testing should be done throughout the lifecycle, not just at the last stage, where you realise that the customer wanted a sportscar and you built a van.
"Functional test" does not mean you are testing a function (method) in your code. It means, generally, that you are testing system functionality -- when I run foo file.txt at the command line, the lines in file.txt become reversed, perhaps. In contrast, a single unit test generally covers a single case of a single method -- length("hello") should return 5, and length("hi") should return 2.
See also IBM's take on the line between unit testing and functional testing.
According to ISTQB those two are not comparable. Functional testing is not integration testing.
Unit testing is a test level, and functional testing is a test type.
Basically:
The function of a system (or component) is 'what it does'. This is typically described in a requirements specification, a functional specification, or in use cases.
while
Component testing, also known as unit, module and program testing, searches for defects in, and verifies the functioning of, software (e.g. modules, programs, objects, classes, etc.) that is separately testable.
According to ISTQB, a component/unit test can be functional or non-functional:
Component testing may include testing of functionality and specific non-functional characteristics such as resource-behavior (e.g. memory leaks), performance or robustness testing, as well as structural testing (e.g. decision coverage).
Quotes from Foundations of software testing - ISTQB certification
In Rails, the unit folder is meant to hold tests for your models, the functional folder is meant to hold tests for your controllers, and the integration folder is meant to hold tests that involve any number of controllers interacting. Fixtures are a way of organizing test data; they reside in the fixtures folder. The test_helper.rb file holds the default configuration for your tests.
You can visit this.
Very simply, we can say:
black box: user interface test, like a functional test
white box: code test, like a unit test
Read more here.
AFAIK, unit testing is NOT functional testing. Let me explain with a small example. You want to test if the login functionality of an email web app is working or not, just as a user would. For that, your functional tests should be like this.
1- existing email, wrong password -> login page should show error "wrong password"!
2- non-existing email, any password -> login page should show error "no such email".
3- existing email, right password -> user should be taken to his inbox page.
4- no @ symbol in email, right password -> login page should say "errors in form, please fix them!"
Should our functional tests check whether we can log in with invalid inputs? E.g. the email has no @ symbol, the username has more than one dot (only one dot is permitted), .com appears before @, etc.? Generally, no! That kind of testing goes into your unit tests.
You can check if invalid inputs are rejected inside unit tests as shown in the tests below.
class LoginInputsValidator
    method validate_inputs_values(email, password)
        1- If email is not like string.string@myapp.com, then throw error.
        2- If email contains abusive words, then throw error.
        3- If password is less than 10 chars, then throw error.
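For illustration, those checks might look roughly like this in C#, with one matching NUnit test (the class name, messages and exact rules are invented here; the abusive-words check is omitted):

using System;
using System.Text.RegularExpressions;
using NUnit.Framework;

public class LoginInputsValidator
{
    private static readonly Regex EmailPattern = new Regex(@"^[^@\s]+@[^@\s]+\.[^@\s]+$");

    public void ValidateInputValues(string email, string password)
    {
        if (!EmailPattern.IsMatch(email))
            throw new ArgumentException("Email address is not well formed.");
        if (password.Length < 10)
            throw new ArgumentException("Password must be at least 10 characters.");
    }
}

[TestFixture]
public class LoginInputsValidatorTests
{
    [Test]
    public void RejectsEmailWithoutAtSymbol()
    {
        var validator = new LoginInputsValidator();

        Assert.Throws<ArgumentException>(
            () => validator.ValidateInputValues("user.example.com", "longenough1"));
    }
}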
Notice that the functional test 4 is actually doing what unit test 1 is doing. Sometimes, functional tests can repeat some (not all) of the testing done by unit tests, for different reasons. In our example, we use functional test 4 to check if a particular error message appears on entering invalid input. We don't want to test if all bad inputs are rejected or not. That is the job of unit tests.
The way I think of it is like this: a unit test establishes that the code does what you intended it to do (e.g. you wanted to add parameters a and b, and you in fact add them rather than subtract them), while functional tests check that all of the code works together to get a correct result, so that what you intended the code to do in fact produces the right result in the system.
UNIT TESTING
Unit testing means testing the smallest units of code, which are usually functions or methods. Unit testing is mostly done by the developer of the unit/method/function, because they understand its core. The main goal of the developer is to cover the code with unit tests.
One limitation is that some functions cannot be tested through unit tests. Even the successful completion of all the unit tests does not guarantee correct operation of the product: the same function can be used in several parts of the system, while the unit test was written for only one usage.
FUNCTIONAL TESTING
It is a type of black-box testing where testing is done on the functional aspects of a product without looking into the code. Functional testing is mostly done by a dedicated software tester. It includes positive, negative and boundary-value-analysis (BVA) techniques, using non-standardized data to test the specified functionality of the product. Functional tests achieve test coverage in a better way than unit tests. They use the application GUI for testing, so it's easier to determine what exactly a specific part of the interface is responsible for, rather than to determine what a specific function in the code is responsible for.
Test types
Unit testing - in procedural programming a unit is a procedure; in object-oriented programming a unit is a class. A unit is isolated and reflects a developer's perspective.
Functional testing - more than a unit. The user's perspective, which describes a feature, use case, story...
Integration testing - checks whether all separately developed components work together. These can be other applications, services, libraries, databases, networks etc.
Narrow integration test - a test double is used. The main purpose is to check whether the component is configured in the right way.
Broad integration test (End to End test, System test) - a live version is used. The main purpose is to check whether all components are configured in the right way.
UI testing - checks whether user input triggers the correct action and the UI changes when some actions happen.
...
Non-functional testing - other cases
Performance testing - measures speed and other metrics
Usability testing - UX
...
Unit Test:-
Unit testing is particularly used to test the product component by component, especially while the product is under development.
Tools like JUnit and NUnit will also help you test the product unit by unit.
Rather than solving issues after integration, it is always more comfortable to get them resolved early in development.
Functional Testing:-
As far as testing is concerned, there are two main types of testing:
1. Functional Test
2. Non-Functional Test
A Non-Functional Test is a test where the tester verifies those quality attributes that the customer doesn't explicitly mention, but which should nevertheless be there.
Like: performance, usability, security, load, stress etc.
In a Functional Test the customer has already stated his requirements and these are properly documented; the tester's task is to cross-check whether the application functionality performs according to the proposed system or not.
For that purpose the tester should test the implemented functionality against the proposed system.
Unit testing is usually done by developers. The objective is to make sure their code works properly. The general rule of thumb is to cover all paths in the code using unit testing.
Functional Testing: This is a good reference. Functional Testing Explanation
(Leaving aside hair-splitting about if this is integration-testing or unit-testing.)
I would rather first test at the large scale. If my app writes a VRML file that is the same as the reference one, then the VRML exporter works; I don't then have to run unit tests on every single statement in the code. I would also like to use this to do some level of poor-man GUI testing by comparing screenshots.
Is there a unit test framework (for C++ ideally) that integrates this sort of testing - or at least makes it easy to integrate with unit tests?
Edit: It seems a better term is approval testing. So are there any other unit test frameworks that incorporate approval testing?
Have a look at Approval Tests, written by a couple of friends of mine. Not C++, but it's the general idea of what you're after, also known as Golden Master tests. Good stuff, whether it's unit tests or not.
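If you would rather not pull in a framework, the golden-master idea itself is small enough to hand-roll. A sketch (shown in C# for consistency with the earlier examples; the exporter, scene factory and file names are placeholders): write the actual output next to a checked-in approved file, compare them, and fail with both paths so the new output can be inspected and, if the change is intended, approved by copying it over:

using System.IO;
using NUnit.Framework;

[TestFixture]
public class VrmlExporterGoldenMasterTests
{
    [Test]
    public void ExportedVrmlMatchesApprovedFile()
    {
        string approvedPath = Path.Combine("TestData", "scene.approved.wrl");
        string receivedPath = Path.Combine("TestData", "scene.received.wrl");

        // Hypothetical exporter and sample scene factory.
        File.WriteAllText(receivedPath, VrmlExporter.Export(SampleScenes.Simple()));

        Assert.That(File.ReadAllText(receivedPath), Is.EqualTo(File.ReadAllText(approvedPath)),
            "Output differs from the approved file; diff {0} against {1} and copy it over if the change is intended.",
            receivedPath, approvedPath);
    }
}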
Kitware, for VTK, uses CDash to do most of its testing. Many of its tests are similar in nature to this - they write out an image of the rendered model, and compare it to a reference image.
In addition, they have code in there to specifically handle very subtle differences to the reference image due to different graphics card drivers/manufacturers/etc. The tests can be written in a way to compare the reference image with some tolerance.
Okay, I think you're making an incorrect assumption about the nature of unit test code; your statement that
If my app writes a VRML file that is the same as the reference one then the VRML exporter works, I don't then have to run unit tests on every single statement in the code.
is strictly correct if you're looking to do a validation test on your code, but note that this type of test is strictly different than what a unit test actually is. Unit tests are for testing individual units of code; they do not exist for verification purposes. Depending on your environment, you may not need unit tests at all, but please keep in mind that validation tests (testing the validity of the overall program output) and unit tests (testing that the individual code units work as expected) are completely different things.
(Note that I'm really not trying to be nitpicky about this; also, you can use plenty of unit test frameworks to achieve this result; keep in mind, though, that what you're writing aren't really "Unit Tests", despite running them in a Unit Test framework.)