I have used TDD as a development style on some projects in the past two years, but I always get stuck on the same point: how can I test the integration of the various parts of my program?
What I am currently doing is writing a test case per class (this is my rule of thumb: a "unit" is a class, and each class has one or more test cases). I try to resolve dependencies by using mocks and stubs, and this works really well, as each class can be tested independently. After some coding, all the important classes are tested. I then "wire" them together using an IoC container. And here I am stuck: how do I test whether the wiring was successful and the objects interact the way I want?
An example: Think of a web application. There is a controller class which takes an array of ids, uses a repository to fetch the records based on these ids and then iterates over the records and writes them as a string to an outfile.
To make it simple, there would be three classes: Controller, Repository, OutfileWriter. Each of them is tested in isolation.
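For concreteness, the three classes might look roughly like this (a hypothetical sketch in Python; the names follow the example above, not any real code):

    class Repository:
        def find_by_ids(self, ids):
            # fetch the records for the given ids from the database
            ...

    class OutfileWriter:
        def write(self, path, records):
            # write each record as a string, one per line
            with open(path, "w") as f:
                f.writelines(str(r) + "\n" for r in records)

    class Controller:
        def __init__(self, repository, writer):
            self.repository = repository
            self.writer = writer

        def export(self, ids, path):
            self.writer.write(path, self.repository.find_by_ids(ids))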
What I would do in order to test the "real" application: make the HTTP request (either manually or automated) with some ids from the database and then look in the filesystem to see if the file was written. Of course this process could be automated, but still: doesn't that duplicate the test logic? Is this what is called an "integration test"? In a book I recently read about unit testing, it seemed to me that integration testing was treated as more of an anti-pattern.
IMO (and I have no literature to back me on this), the key difference between our various forms of testing is scope:
Unit testing is testing isolated pieces of functionality [typically a method or stateful class]
Integration testing is testing the interaction of two or more dependent pieces [typically a service and consumer, or even a database connection, or connection to some other remote service]
System integration testing is testing of a system end to end [a special case of integration testing]
If you are familiar with unit testing, then it should come as no surprise that there is no such thing as a perfect or 'magic-bullet' test. Integration and system integration testing is very much like unit testing, in that each is a suite of tests set to verify a certain kind of behavior.
For each test, you set the scope, which then dictates the input and expected output. You then execute the test and compare the actual result to the expected one.
In practice, you may have a good idea how the system works, and so writing typical positive and negative path tests will come naturally. However, for any application of sufficient complexity, it is unreasonable to expect total coverage of every possible scenario.
Unfortunately, this means unexpected scenarios will crop up in Quality Assurance [QA], Pre-Production [PP], and Production [Prod] cycles. At that point, your attempts to replicate these scenarios in dev should make their way into your integration and system integration suites as automated tests.
Hope this helps, :)
ps: pet-peeve #1: managers or devs calling integration and system integration tests "unit tests" simply because NUnit or MSTest was used to automate them ...
What you describe is indeed integration testing (more or less). And no, it is not an anti-pattern, but a necessary part of the software development lifecycle.
Any reasonably complicated program is more than the sum of its parts. So however well you unit test it, you still have little idea whether the whole system is going to work as expected.
There are several reasons why this is so:
unit tests are performed in an isolated environment, so they can't say anything about how the parts of the program are working together in real life
the "unit tester hat" easily limits one's view, so there are whole classes of factors which the developers simply don't recognize as something that needs to be tested*
even if they do, there are things which can't be reasonably tested in unit tests - e.g. how do you test whether your app server survives under high load, or if the DB connection goes down in the middle of a request?
* One example I just read in Luke Hohmann's book Beyond Software Architecture: in an app which applied a strong anti-piracy defense by creating and maintaining a "snapshot" of the IDs of hardware components in the actual machine, the developers had the code very well covered with unit tests. Then QA managed to crash the app in 10 minutes by trying it out on a machine without a network card. As it turned out, since the developers were working on Macs, they took it for granted that the machine had a network card whose MAC address could be incorporated into the snapshot...
What I would do in order to test the "real" application: make the HTTP request (either manually or automated) with some ids from the database and then look in the filesystem to see if the file was written. Of course this process could be automated, but still: doesn't that duplicate the test logic?
Maybe you are duplicating code, but you are not duplicating effort. Unit tests and integration tests serve two different purposes, and usually both purposes are desired in the SDLC. If possible, factor the code used by both unit and integration tests out into a common library. I would also try to keep separate projects for your unit and integration tests, because your unit tests should be run separately (fast and with no dependencies). Your integration tests will be more brittle and break often, so you will probably have a different policy for running and maintaining those tests.
Is this what is called an "integration test"?
Yes indeed it is.
In an integration test, just as in a unit test, you need to validate what happened in the test. In your example you specified an OutfileWriter, so you would need some mechanism to verify that the file and its data are good. You really want to automate this, so you might want to have something like:
    # sketched in Python; the original answer gave pseudocode
    class OutFileValidator:
        def is_correct(self, fname, expected_rows):
            # open the file, read the data back, and run the validation logic
            with open(fname) as f:
                actual = f.read().splitlines()
            return actual == [str(row) for row in expected_rows]
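Wired together, an automated integration test might then look like this; a sketch that assumes the Controller, Repository, and OutfileWriter from the question, plus a test database with known contents behind the repository:

    import os
    import tempfile
    import unittest

    class ExportIntegrationTest(unittest.TestCase):
        def test_export_writes_expected_file(self):
            fd, path = tempfile.mkstemp()
            os.close(fd)
            expected_rows = ["record-1", "record-2", "record-3"]  # assumed contents of ids 1-3
            try:
                # real wiring, no mocks
                controller = Controller(Repository(), OutfileWriter())
                controller.export([1, 2, 3], path)
                self.assertTrue(OutFileValidator().is_correct(path, expected_rows))
            finally:
                os.remove(path)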
You might review "Taming the Beast", a presentation by Markus Clermont and John Thomas about automated testing of AJAX applications.
YouTube Video
Very rough summary of a relevant piece: you want to use the smallest testing technique you can for any specific verification. Spelling the same idea another way, you are trying to minimize the time required to run all of the tests, without sacrificing any information.
The larger tests, therefore, are mostly about making sure that the plumbing is right: is Tab A actually in slot A rather than slot B; do both components agree that length is measured in meters rather than feet; and so on.
There's going to be duplication in which code paths are executed, and possibly you will reuse some of the setup and verification code, but I wouldn't normally expect your integration tests to include the same level of combinatoric explosion that would happen at a unit level.
Driving your TDD with BDD would cover most of this for you. You can use Cucumber / SpecFlow with Watir / WatiN. Each feature has one or more scenarios, and you work on one scenario (behaviour) at a time; when it passes, you move on to the next scenario until the feature is complete.
To complete a scenario, you have to use TDD to drive the code necessary to make each step in the current scenario pass. The scenarios are agnostic to your back-end implementation, yet they verify that your implementation works; if something isn't working in the web app for that feature, that behaviour needs to be captured in a scenario.
You can of course use integration testing, as others pointed out.
Related
I don't understand how I'm testing anything with unit testing.
Suppose I am testing that my repository class can retrieve values from the database correctly. The proper way to do this would be to actually call the real database and retrieve and check those values.
But the idea behind unit testing is that it should be done in isolation, and connecting to a running database is not isolation. So what is usually done is to mock or stub the database.
But why would testing on a fake database with hardcoded data and hardcoded return values even test anything? It seems tautological and a waste of time.
Or am I not understanding how to unit test properly?
Does one even unit test database calls?
I don't understand how I'm testing anything with unit testing.
Short answer: you are testing the logic, and leaving out the side effects.
You aren't testing everything; but you are testing something.
Furthermore, if you keep in mind that you aren't really testing the code with side effects, then you are motivated to arrange your code so that the pieces that actually depend on the side effect are small. The big pieces don't actually care where the data comes from, so those are easy to test.
So "something" can be "most things".
There is an impedance problem -- if your test doubles impersonate the production originals inadequately, then some of your test results will be inaccurate.
my philosophy is to test as little as possible to reach a given level of confidence
Kent Beck, 2008
One way of imagining "as little as possible" is to think in terms of cost -- we're aiming for a given confidence level, so we want to achieve as much of that confidence as we can using cheap unit tests, and then make up the difference with more expensive techniques.
Cory Benfield's talk Building Protocol Libraries the Right Way describes an example of the kind of separation we're talking about here. The logic of how to parse an HTTP message is separable from the problem of reading the bytes. If you make the complicated part easy to test, and the hard-to-test part too simple to fail, your chances of succeeding are quite good.
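A minimal sketch of that separation (hypothetical names in the spirit of the talk, not its actual code): the parsing logic is pure and trivially unit-testable, while the byte-reading shell is kept too simple to fail.

    def parse_status_line(line: bytes):
        # pure logic: no sockets, no files
        version, code, reason = line.rstrip(b"\r\n").split(b" ", 2)
        return version, int(code), reason

    def read_status_line(sock):
        # side-effecting shell: accumulate bytes, then delegate
        buf = b""
        while not buf.endswith(b"\r\n"):
            buf += sock.recv(1)
        return parse_status_line(buf)

A unit test then simply asserts that parse_status_line(b"HTTP/1.1 200 OK\r\n") returns (b"HTTP/1.1", 200, b"OK"), with no network in sight.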
I think your concern is valid. For me, TDD is more of an evolutionary design practice than unit testing practice, but I'll save that for another discussion.
In your example, what we are really testing is that the logic contained within your individual classes is sound. By stubbing the data coming from the database you have a controlled scenario that you can ensure your code works for that particular scenario. This makes it much easier to ensure full test coverage for all data scenarios. You're correct that this really doesn't test the whole system end to end, but the point is to reduce the overall test maintenance costs and enable faster feedback.
My approach is to mock most collaborators at the unit test level, then write acceptance tests at the integration test level which validate your system using real data. Because the unit tests with their mocked data allow you to try out various data scenarios, you only need to test a few of those scenarios using integration tests to feel confident that your code will perform as you expect.
You can test your code against an actual database in isolation. Just create a new database instance for every test, or execute the tests synchronously one after another and clean the database before each test.
But using an actual database will make your tests slow, which will slow down your work, because you want quick feedback on what you are doing.
Do not test every class; test the main feature logic, which can use many different classes, and mock/stub only the dependencies that make tests slow.
Find your application boundaries and test the logic between them without mocking.
For example, in a trivial web API application the boundaries can be:
- controller action -> request(input)
- controller action -> response(output)
- database -> side effect of received request.
Assume we live in a perfect world where setting up a new database and web server takes milliseconds. Then you would test the whole pipeline of your application:
1. Configure database for test
2. Send request to the web api server
3. Assert that response contains expected data
4. Assert that database state changed as expected
But in today's world, your boundaries will be the controller action and an abstracted database access point, which makes your test look like the following:
1. Configure a mocked database access point (repository)
2. Call the controller action with the given parameters
3. Assert that the action returns the expected result
4. Possibly assert that the mocked repository received the expected update arguments.
If your application has no logic, and just reads/updates data from the database, test against an actual database or, if your database framework allows it, use an in-memory database.
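A minimal sketch of the mocked-boundary version, using Python's unittest; UserController and its methods are illustrative names, with a tiny stand-in implementation so the test can actually run:

    import unittest
    from unittest.mock import Mock

    class UserController:
        # stand-in controller for illustration
        def __init__(self, repository):
            self.repository = repository

        def update_user(self, user_id, name):
            user = self.repository.get_user(user_id)
            user["name"] = name
            self.repository.save_user(user)
            return "ok"

    class UpdateUserActionTest(unittest.TestCase):
        def test_update_saves_and_returns_ok(self):
            repository = Mock()                              # 1. mocked access point
            repository.get_user.return_value = {"id": 42, "name": "old"}
            controller = UserController(repository)
            result = controller.update_user(42, name="new")  # 2. call the action
            self.assertEqual(result, "ok")                   # 3. expected result
            repository.save_user.assert_called_once()        # 4. expected update call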
What are the different purposes of these two? I mean, under which conditions should I do each of them?
As for the example situation: if you have a backend server and several front-end web apps, which one would you do first: unit testing the backend server, or UI testing in the web UI?
Given the situation, the server and the front-end webs already exist, so it's not an iterative design you can build along with (TDD)...
Unit testing aims to test small portions of your code (individual classes / methods) in isolation from the rest of the world.
UI testing may be a different name for system / functional / acceptance testing, where you test the whole system together to ensure it does what it is supposed to do under real life circumstances. (Unless by UI testing you mean usability / look & feel etc. testing, which is typically constrained to details on the UI.)
You need both of these in most of projects, but at different times: unit testing during development (ideally from the very beginning, TDD style), and UI testing somewhat later, once you actually have some complete end-to-end functionality to test.
If you already have the system running, but no tests, practically you have legacy code. Strive to get the best test coverage achievable with the least effort first, which means high level functional tests. Adding unit tests is needed too, but it takes much more effort and starts to pay back later.
Recommended reading: Working Effectively with Legacy Code.
Unit testing should always be done. Unit tests are there to provide proof that each unit (read: object) of your technical solution delivers the expected results. To put it very (maybe too) simply, user testing is there to verify that your system fulfills the needs and demands of the user.
The test pyramid [1] is an important concept here, well described by Martin Fowler.
In short, tests that run end-to-end through the UI are brittle and expensive to write. You may consider test recording tools [2] to speed up recording and re-recording. Disclaimer: I'm a developer of such a tool.
[1] https://martinfowler.com/articles/practical-test-pyramid.html
[2] https://anwendo.com
In addition to the accepted answer, today I came up with this question: why not just programmatically trigger layout functions and then unit test your logic around that as well?
The answer I got from a senior dev was: programmatically triggering layout functions will not be an exact copy of the real user experience. In the real world, the system will trigger many callbacks, such as when the user backgrounds or foregrounds the app. Obviously you can trigger such events manually and test again, but would you be sure you got all the events, in all the right sequences?
The real user experience is one where the user makes actual network calls, taps on screens, loads multiple screens on top of each other, and at times receives system callbacks: callbacks which you forgot to mock, or didn't mock properly. In unit tests you're mainly testing in isolation. In a UI test, you set up the app, may have to log in, etc. The stack you build is much more complex than in a unit test. Hence it's better not to mix unit testing with UI testing.
I am new to unit testing. Suppose I am building a web application. How do I know what to test? All the examples you see involve some sort of basic sum function that has no real value; at least, I've never written a function that adds two inputs and then returns the result!
So, my question... on a web application, what are the sorts of things that need to be tested?
I know that this is a broad question but anything will be helpful. I would be interested in links or anything that gives real life examples as opposed to concept examples that don't have any real life usage.
Take a look at your code, especially the bits where you have complex logic with loops, conditionals, etc, and ask yourself: How do I know if this works?
If you need to change the complex logic to take into account other corner cases then how do you know that the changes you introduce don't break the existing cases? This is precisely what unit testing is intended to address.
So, to answer your question about how it applies to web applications: suppose you have some code that lays out the page differently depending on the browser. One of your customers refuses to upgrade from IE6 and insists that you support it. So you unit test your layout code by simulating the user-agent string from IE6 and checking that the layout is what you expect.
A customer tells you they've found a security hole where using a particular cookie will give you administrator access. How do you know that you've fixed the bug and it doesn't happen again? Create a unit test for it, and run the unit tests on a daily basis so that you get an early warning if it fails.
You discover a bug where users with accented characters in their names get corrupted in the database. Abstract the web-form input out from the database layer and add unit tests to ensure that (e.g.) UTF-8 encoded data is stored in the database correctly and can be retrieved.
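For that last example, a hedged sketch of such a round-trip test, with a plain dict standing in for the real database layer:

    import unittest

    class Utf8RoundTripTest(unittest.TestCase):
        def test_accented_names_survive_storage(self):
            store = {}                               # stand-in for the database layer
            name = "Renée Müller"
            store["user:1"] = name.encode("utf-8")   # what the persistence code should do
            self.assertEqual(store["user:1"].decode("utf-8"), name)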
You get the idea. Anywhere where part of the process has a well-defined input and output is ideal for unit testing. Anything that doesn't is ideal for refactoring until it is well defined. Take a look at projects such as WebUnit, HTMLUnit, XMLUnit, CSSUnit.
The first part of testing is to write testable applications. Separate out as much functionality as possible from the UI. Refactor into smaller methods. Learn about dependency injection, and try using it to create methods that can take simple, throw-away input that produces known (and therefore testable) results. Look at mocking tools.
Infrastructure and data layer code is easiest to test.
Look at behavior-driven testing as well as test-driven design. For my money, behavior testing is better than pure unit testing; you can follow use-cases, so that tests match against expected usage patterns.
Unit testing means testing any unit of work; the smallest units of work are methods and functions. The art of unit testing is to define tests for a function that cannot just be checked by inspection. What unit testing aims at is testing every possible functional requirement of a method.
Consider for example that you have a login function. Then there could be the following tests that you could write for failure cases:
1. Does the function fail on empty username and password
2. Does the function fail on the correct username but the wrong password
3. Does the function fail on the correct password but the wrong username
Then you would also write tests that the function should pass:
1. Does the function pass on correct username and password
This is just a basic example, but it is what unit testing attempts to achieve: testing out things that may have been overlooked during development.
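Those cases translate almost directly into test methods. A sketch, assuming a login(username, password) function that returns True on success (the stand-in implementation exists only so the tests can run):

    import unittest

    def login(username, password):
        # stand-in implementation for illustration
        return username == "alice" and password == "secret123"

    class LoginTest(unittest.TestCase):
        def test_fails_on_empty_username_and_password(self):
            self.assertFalse(login("", ""))

        def test_fails_on_correct_username_wrong_password(self):
            self.assertFalse(login("alice", "wrong"))

        def test_fails_on_wrong_username_correct_password(self):
            self.assertFalse(login("bob", "secret123"))

        def test_passes_on_correct_username_and_password(self):
            self.assertTrue(login("alice", "secret123"))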
Then there is a purist approach too where a developer is first supposed to write tests and then the code to pass those tests (aka test driven development).
Resources:
http://devzone.zend.com/article/2772
http://www.ibm.com/developerworks/library/j-mocktest.html
If you're new to TDD, may I suggest a quick trip into the world of BDD? My experience is that the language really helps people pick up TDD more quickly. Particularly, I point you at this article, in which Dan North suggests "what to test":
http://blog.dannorth.net/introducing-bdd/
Note for transparency: I may be heavily involved in the BDD movement.
Regarding the classes to unit test in a web-app, I'd consider starting with controllers, domain objects if they have complex behaviour, and anything called "service", "manager", "helper" or "util". Please also try renaming any classes like this so that they are less generic and actually say what they do. Classes called "calculator" or "converter" are also good candidates, and you'll probably find more in the same package / folder.
There are a couple of good books which could help you too:
Martin Fowler, "Refactoring"
Michael Feathers, "Working Effectively with Legacy Code"
Good luck!
If you start out saying, "How do I test my web app?" that is biting off a lot at once, and it's going to be hard to see unit testing as providing any kind of benefit. I got into unit testing by starting with small pieces that were isolated, then writing libraries test-first, and only then building whole applications that were testable.
Generally a web app has a domain model, it has data access objects that do queries on a database and return domain objects, it has services that call the data access objects, and it has controllers that accept http requests and call the services.
Tests for the controllers will check that they call the right service method with the right parameters. Service objects can be mocks injected during test setup.
Tests for the services will check that they call the right data access objects and perform whatever logic they need to be performing. Data access objects can be mocks injected during test setup.
Tests for the data access objects will check that they perform the right database operation (query or update or whatever) by checking the contents of the database before and after. For dao tests you'll need a database, and a tool like DBUnit to pre-populate it before the test. Also your domain objects' getters and setters will get exercised with this test so you won't need a separate test for them.
Tests for the domain model will check that whatever domain logic you have encoded in them works (Sometimes you may not have any). If you design your domain model so it is not coupled to the database then the more logic you put in the domain model the better because it's easy to test. You shouldn't need any mocks for these tests.
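To make the data access level concrete, here is a hedged sketch using Python's built-in sqlite3 as the pre-populated test database (standing in for DBUnit and a real DAO class):

    import sqlite3
    import unittest

    class UserDaoTest(unittest.TestCase):
        def setUp(self):
            # pre-populate an in-memory database before each test
            self.db = sqlite3.connect(":memory:")
            self.db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
            self.db.execute("INSERT INTO users VALUES (1, 'alice')")

        def test_rename_updates_the_row(self):
            self.db.execute("UPDATE users SET name = ? WHERE id = ?", ("bob", 1))
            row = self.db.execute("SELECT name FROM users WHERE id = 1").fetchone()
            self.assertEqual(row[0], "bob")  # check the contents after the operation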
For a web app the kind of tests you need to do are slightly different. Unit tests are tests which test a particular component of your program. For a web app, you would need to test that forms accept/reject the right inputs, that all links point to the right place, that it can cope with unexpected inputs etc. I'd have a look at Selenium if I were you, I've used it extensively in testing a number of sites: Selenium HQ
I don't have experience of testing web apps, but speaking generally: you unit test the smallest 'chunks' of your program possible. That means you test each function on an individual basis. Anything on a larger scale becomes an integration test.
Of course, there are going to be methods so simple that it's not worth your time to write a test for them, but on the whole, aim to test as great a proportion of your code as possible.
A rule of thumb is that if it is not worth testing it is not worth writing.
However, some things are very difficult to test, so you have to do some cost-benefit analysis on what you test. If you initially aim for 70% code coverage, you will be on the right track.
If I have a Data Access Layer (nHibernate), for example a class called UserProvider, and a business logic class UserBl, should I test both of their methods SaveUser or GetUserById, or any other public method in the DA layer which is called from the BL layer? Is this redundant, or common practice?
Is it common to unit test the DA layer, or does that belong to the integration testing domain?
Is it better to have a test database, or to create database data during the test?
Any help is appreciated.
There's no right answer to this; it really depends. Some people (e.g. Roy Osherove) say you should only test code which has conditional logic (IF statements etc.), which may or may not include your DAL. Some people (often those doing TDD) will say you should test everything, including the DAL, and aim for 100% code coverage.
Personally I only test it if it has logic in it, so I end up with some DAL methods tested and some not. Most of the time you just end up checking that your BL calls your DAL, which has some merit, but which I don't find necessary. I think it makes more sense to have integration tests which cover the app end to end, including the database, which covers things like GetUserById.
Either way, and you probably know this already, but make sure your unit tests don't touch an actual database. (No problem doing this, but that's an integration test not a unit test, as it takes a lot longer and involves complex setup, and should be run separately).
It is good practice to write unit tests for every layer, even the DAL.
I don't think running tests on the real DB is a good idea; you might ruin important data. We used to set up a copy of the DB for tests, with just enough data in it to run the tests on.
In our test project we had a special web.config file with test settings, like a ConnectionString to our test db.
In my experience it was helpful to test each layer on its own, then integrate and test again. Integration tests normally do not cover all aspects. Sometimes, if the Data Access Layer (I don't know nHibernate) is generated code or generic code of some sort, testing it looks like overkill. But I have seen more than once that systematic testing pays off.
Is it redundant? In my opinion it is not.
Is it common practice? Hard to tell. I would say no. I have seen it in some projects but not in all projects I have worked on. It often depended on time/resources and the mentality of the team or individual developer.
Is it better to have a test database, or create database data during the test? That is quite a different question and cannot be answered easily. It depends on your project. Creating a new one is good, but sometimes throws up unrealistic bugs (though bugs nonetheless). It also depends on whether it is product development or proprietary development: in a proprietary on-site development, a database usually gets migrated in from somewhere, so a second test with the migrated data is definitely needed. But that is rather at the system-test level.
Unit testing the DAL is worth it, as mentioned, if there is logic in there. For example, if you use the same stored procedure for insert and update, it's worth knowing that an insert works, that a subsequent call updates the previous row, and that a select returns a single record and not a list. In your case, the SaveUser method probably inserts the first time around and subsequently updates; it's nice to know that this is what is being done at the unit test stage.
If you're using a framework like iBatis or Hibernate where you can implement type handlers, it's worth confirming that the handlers handle values in a way that's acceptable to your underlying DB.
As for testing against an actual DB, if you use a framework like Spring you can avail of the supported database unit test classes with automatic rollback of transactions, so you run your tests and the DB is unaffected afterwards. See here for information. Others probably offer similar support.
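The rollback pattern itself is simple. Sketched here with plain sqlite3 rather than Spring's transactional test classes (the database and schema are assumed):

    import sqlite3

    db = sqlite3.connect("app.db")          # assumed existing test database
    try:
        db.execute("DELETE FROM users")     # exercise the code under test
        # ... run assertions against the modified state here ...
    finally:
        db.rollback()                       # leave the database unaffected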
What is the difference between unit tests and functional tests? Can a unit test also test a function?
Unit tests tell a developer that the code is doing things right; functional tests tell a developer that the code is doing the right things.
You can read more at Unit Testing versus Functional Testing
A well-explained real-life analogy of unit testing and functional testing can be described as follows:
Many times the development of a system is likened to the building of a house. While this analogy isn't quite correct, we can extend it for the purposes of understanding the difference between unit and functional tests.
Unit testing is analogous to a building inspector visiting a house's construction site. He is focused on the various internal systems of the house, the foundation, framing, electrical, plumbing, and so on. He ensures (tests) that the parts of the house will work correctly and safely, that is, meet the building code.
Functional tests in this scenario are analogous to the homeowner visiting this same construction site. He assumes that the internal systems will behave appropriately, that the building inspector is performing his task. The homeowner is focused on what it will be like to live in this house. He is concerned with how the house looks, are the various rooms a comfortable size, does the house fit the family's needs, are the windows in a good spot to catch the morning sun.
The homeowner is performing functional tests on the house. He has the user's perspective.
The building inspector is performing unit tests on the house. He has the builder's perspective.
In summary:
Unit tests are written from a programmer's perspective. They are made to ensure that a particular method (or unit) of a class performs a set of specific tasks.
Functional Tests are written from the user's perspective. They ensure that the system is functioning as users are expecting it to.
Unit Test - testing an individual unit, such as a method (function) in a class, with all dependencies mocked up.
Functional Test - AKA Integration Test, testing a slice of functionality in a system. This will test many methods and may interact with dependencies like Databases or Web Services.
A unit test tests an independent unit of behavior. What is a unit of behavior? It's the smallest piece of the system that can be independently unit tested. (This definition is actually circular, IOW it's really not a definition at all, but it seems to work quite well in practice, because you can sort-of understand it intuitively.)
A functional test tests an independent piece of functionality.
A unit of behavior is very small: while I absolutely dislike this stupid "one unit test per method" mantra, from a size perspective it is about right. A unit of behavior is something between a part of a method and maybe a couple of methods. At most an object, but not more than one.
A piece of functionality usually comprises many methods and cuts across several objects and often through multiple architectural layers.
A unit test would be something like: when I call the validate_country_code() function and pass it the country code 'ZZ' it should return false.
A functional test would be: when I fill out the shipping form with a country code of ZZ, I should be redirected to a help page which allows me to pick my country code out of a menu.
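In code, the unit test is tiny (a sketch; validate_country_code is the function from the example, stubbed here so the test can run):

    import unittest

    def validate_country_code(code):
        return code in {"US", "DE", "FR"}   # stub lookup for illustration

    class CountryCodeTest(unittest.TestCase):
        def test_unknown_country_code_is_rejected(self):
            self.assertFalse(validate_country_code("ZZ"))

The functional test, by contrast, would drive the shipping form in a browser and assert on the redirect, several layers above this function.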
Unit tests are written by developers, for developers, from the developer's perspective.
Functional tests may be user facing, in which case they are written by developers together with users (or maybe with the right tools and right users even by the users themselves), for users, from the user's perspective. Or they may be developer facing (e.g. when they describe some internal piece of functionality that the user doesn't care about), in which case they are written by developers, for developers, but still from the user's perspective.
In the former case, the functional tests may also serve as acceptance tests and as an executable encoding of functional requirements or a functional specification, in the latter case, they may also serve as integration tests.
Unit tests change frequently, functional tests should never change within a major release.
TLDR:
To answer the question: Unit Testing is a subtype of Functional Testing.
There are two big groups: functional and non-functional testing. The best (non-exhaustive) illustration that I found is the one from www.inflectra.com.
(1) Unit Testing: testing of small snippets of code (functions/methods). It may be considered as (white-box) functional testing.
When functions are put together, you create a module: a standalone piece, possibly with a user interface, that can be tested (module testing). Once you have at least two separate modules, you glue them together, and then comes:
(2) Integration Testing: when you put two or more pieces of (sub)modules or (sub)systems together and see if they play nicely together.
Then you integrate the 3rd module, then the 4th and 5th in whatever order you or your team see fit, and once all the jigsaw pieces are placed together, comes
(3) System Testing: testing the software as a whole. This is pretty much "integration testing of all the pieces together".
If that's OK, then comes
(4) Acceptance Testing: did we actually build what the customer asked for? Of course, acceptance testing should be done throughout the lifecycle, not just at the last stage, where you realise that the customer wanted a sports car and you built a van.
"Functional test" does not mean you are testing a function (method) in your code. It means, generally, that you are testing system functionality -- when I run foo file.txt at the command line, the lines in file.txt become reversed, perhaps. In contrast, a single unit test generally covers a single case of a single method -- length("hello") should return 5, and length("hi") should return 2.
See also IBM's take on the line between unit testing and functional testing.
According to ISTQB those two are not comparable. Functional testing is not integration testing.
Unit testing is one of the test levels, and functional testing is a type of testing.
Basically:
The function of a system (or component) is 'what it does'. This is typically described in a requirements specification, a functional specification, or in use cases.
while
Component testing, also known as unit, module and program testing, searches for defects in, and verifies the functioning of, software (e.g. modules, programs, objects, classes, etc.) that is separately testable.
According to ISTQB, a component/unit test can be functional or non-functional:
Component testing may include testing of functionality and specific non-functional characteristics such as resource-behavior (e.g. memory leaks), performance or robustness testing, as well as structural testing (e.g. decision coverage).
Quotes from Foundations of software testing - ISTQB certification
In Rails, the unit folder is meant to hold tests for your models, the functional folder is meant to hold tests for your controllers, and the integration folder is meant to hold tests that involve any number of controllers interacting. Fixtures are a way of organizing test data; they reside in the fixtures folder. The test_helper.rb file holds the default configuration for your tests.
Very simply, we can say:
Black box: user-interface tests, like functional tests
White box: code tests, like unit tests
AFAIK, unit testing is NOT functional testing. Let me explain with a small example. You want to test whether the login functionality of an email web app is working, just as a user would. For that, your functional tests should be like this:
1. Existing email, wrong password -> login page should show the error "wrong password"!
2. Non-existing email, any password -> login page should show the error "no such email".
3. Existing email, right password -> user should be taken to their inbox page.
4. No # symbol in the email, right password -> login page should say "errors in form, please fix them!"
Should our functional tests check whether we can log in with invalid inputs? E.g. the email has no # symbol, the username has more than one dot (only one dot is permitted), .com appears before #, etc.? Generally, no! That kind of testing goes into your unit tests.
You can check that invalid inputs are rejected inside unit tests of a validator like the one below (rendered here in Python; the original was pseudocode, and the abusive-word list is an assumed stand-in).
    import re
    ABUSIVE_WORDS = {"spam"}   # assumed stand-in word list

    class LoginInputsValidator:
        def validate_inputs_values(self, email, password):
            if not re.fullmatch(r"\w+\.\w+#myapp\.com", email):   # '#' as in the example
                raise ValueError("invalid email format")
            if any(word in email for word in ABUSIVE_WORDS):
                raise ValueError("email contains abusive words")
            if len(password) < 10:
                raise ValueError("password must be at least 10 characters")
Notice that functional test 4 is actually doing what unit test 1 is doing. Sometimes functional tests can repeat some (not all) of the testing done by unit tests, for different reasons. In our example, we use functional test 4 to check that a particular error message appears on entering invalid input. We don't want to test there whether all bad inputs are rejected; that is the job of the unit tests.
The way I think of it is like this: a unit test establishes that the code does what you intended it to do (e.g. you wanted to add parameters a and b, and you in fact add them rather than subtract them), while functional tests check that all of the code works together to get a correct result, so that what you intended the code to do in fact produces the right result in the system.
UNIT TESTING
Unit testing includes testing the smallest units of code, which usually are functions or methods. Unit testing is mostly done by the developer of the unit/method/function, because they understand its core. The main goal of the developer is to cover the code with unit tests.
It has the limitation that some functions cannot be tested through unit tests. Even the successful completion of all the unit tests does not guarantee correct operation of the product. The same function can be used in several parts of the system, while the unit test was written for only one usage.
FUNCTIONAL TESTING
It is a type of black-box testing, where testing is done on the functional aspects of a product without looking at the code. Functional testing is mostly done by a dedicated software tester. It includes positive, negative, and boundary value analysis (BVA) techniques, using non-standardized data, to test the specified functionality of the product. Functional tests often cover the product more broadly than unit tests do. They use the application's GUI for testing, so it's easier to determine what exactly a specific part of the interface is responsible for, rather than determining which function in the code is responsible for it.
Test types
Unit testing - in procedural programming a unit is a procedure; in object-oriented programming a unit is a class. A unit is isolated and reflects a developer's perspective
Functional testing - more than a unit. A user's perspective, which describes a feature, use case, story...
Integration testing - checks whether all separately developed components work together. A component can be another application, service, library, database, network, etc.
Narrow integration test - a test double is used. The main purpose is to check that the component is configured in the right way
Broad integration test (end-to-end test, system test) - a live version. The main purpose is to check that all components are configured in the right way
UI testing - checks whether user input triggers the correct action and whether the UI changes when certain actions happen
...
Non-functional testing - other cases
Performance testing - measure speed and other metrics
Usability testing - UX
...
Unit Test:-
Unit testing is used to test the product component by component, especially while the product is under development.
Tools like JUnit and NUnit will also help you test the product unit by unit.
Rather than solving issues after integration, it is always easier to resolve them early in development.
Functional Testing:-
As far as testing is concerned, there are two main types:
1. Functional tests
2. Non-functional tests
A non-functional test is one where the tester verifies the quality attributes that the customer doesn't explicitly mention but that should nevertheless be present, like performance, usability, security, load, stress, etc.
In a functional test, the customer has already stated his requirements and they are properly documented; the tester's task is to cross-check whether the application's functionality performs according to the proposed system.
For that purpose, the tester tests the implemented functionality against the proposed system.
Unit testing is usually done by developers. The objective is to make sure their code works properly. The general rule of thumb is to cover all paths in the code with unit tests.
Functional Testing: a good reference is Functional Testing Explanation.