iOS: Code Coverage Confusion

I have gone through the
http://www.cocoanetics.com/2013/10/xcode-coverage/
link. Being new to unit testing, I would like to know how code coverage identifies which source code is being covered.
My theoretical question:
Suppose a model class [a subclass of NSObject] contains three methods M1, M2 and M3, and we create an XCTestCase subclass with three unit test methods testM1, testM2 and testM3. Suppose we are able to run all three test methods and to generate the .gcda/.gcno files [from code coverage].
My question is: how can one say from this code coverage that the model has more than 80% coverage? Is it necessary to write a unit test for each and every method in the model(s), and only then can we conclude that more than 80-90% of the code is covered? In short, I would like to know the correlation between unit test methods and code coverage.

Unit test methods call your methods (the units under test) with different scenarios in mind and with the intention of exercising all (important) code paths.
To see what is being covered, the app you build is compiled with instrumented program flows so that it generates the coverage files. Using this instrumentation, the program knows which code was actually run during the test runs, and how many times.
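To make the correlation with the M1/M2/M3 example concrete, here is a rough sketch in Python rather than Objective-C (the principle is the same as with Xcode's .gcda/.gcno instrumentation; the Model class and its tests are invented for illustration). Having a test per method is not what coverage measures: coverage counts the lines and branches your tests actually execute, so a method with a test can still be only partially covered.
# coverage_demo.py - model and tests in one file for brevity
import unittest

class Model:
    def m1(self):
        return 1

    def m2(self, flag):
        if flag:
            return "yes"   # executed by the test below
        return "no"        # never executed: m2 is only partially covered

    def m3(self):
        return 3           # never executed: m3 contributes no coverage

class TestModel(unittest.TestCase):
    def test_m1(self):
        self.assertEqual(Model().m1(), 1)

    def test_m2_true(self):
        self.assertEqual(Model().m2(True), "yes")

if __name__ == "__main__":
    unittest.main()
Run under an instrumenting tool such as coverage.py (coverage run -m unittest coverage_demo, then coverage report), the report would show m1 fully covered, m2 partially covered and m3 not covered at all, well under 100% despite two passing tests. The .gcda/.gcno files record exactly this kind of per-line execution count for your Objective-C code.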

Related

Alert if not equal to the true value num_steps

I am not that good with the "unittest" topic. I'd like to create a unit test in order to say "Hey buddy, this is the wrong (or right) answer because blabla!". I need to put a unit test in place because it took 3 MF weeks to find out why the prediction in the machine learning model did not work! Hence I want to avoid that type of error in the future.
Questions :
How can I ask the code to alert me when len(X) - len(pred_values) is not equal to num_step?
Do I need to create a unit test file to gather all the unit tests, e.g. unittest.py?
Do we need to place the unit tests away from the main code?
1.
The test code can alert you by means of an assertion. In your test, you can use self.assertEqual()
self.assertEqual(len(X) - len(pred_values), num_step)
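Wrapped into a complete test case, it might look like the following sketch (make_predictions and num_step are hypothetical stand-ins for your own prediction code, with names taken from your question):
import unittest

from mymodel import make_predictions, num_step  # hypothetical module

class TestPredictionLength(unittest.TestCase):
    def test_offset_equals_num_step(self):
        X = list(range(100))               # stand-in input data
        pred_values = make_predictions(X)  # unit under test
        # Fails with both values printed when the lengths drift apart.
        self.assertEqual(len(X) - len(pred_values), num_step)

if __name__ == "__main__":
    unittest.main()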
2.
Yes, you would normally gather your TestCase classes in a module prefixed with test_. So if the code under test lives in a module called foo.py, you would place your tests in test_foo.py.
Within test_foo.py you can create multiple TestCase classes that group related tests together.
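For example, if foo.py exposed parse() and format() functions (hypothetical names), test_foo.py might look like:
# test_foo.py - tests for foo.py, grouped into related TestCase classes
import unittest
import foo

class TestParse(unittest.TestCase):
    def test_parse_returns_dict(self):
        self.assertIsInstance(foo.parse("key=value"), dict)

class TestFormat(unittest.TestCase):
    def test_format_round_trips(self):
        data = {"key": "value"}
        self.assertEqual(foo.parse(foo.format(data)), data)

if __name__ == "__main__":
    unittest.main()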
3.
It is a good idea to separate the tests from the main code, although not mandatory. Reasons why you might want to separate the tests include (as quoted from the docs):
The test module can be run standalone from the command line.
The test code can more easily be separated from shipped code.
There is less temptation to change test code to fit the code it tests without a good reason.
Test code should be modified much less frequently than the code it tests.
Tested code can be refactored more easily.
Tests for modules written in C must be in separate modules anyway, so why not be consistent?
If the testing strategy changes, there is no need to change the source code.
Lots more info in the official docs.

Why Should I Write Unit Tests if I have UI Automation Tests [closed]

If I have UI automation tests, why do I need to write unit tests?
If I need to check that a method returns some output for a given input (for example, the result of an addition that is then displayed in a view), why do I need a unit test if I can confirm that the output in the view is correct (or not) through a UI automation test?
Unit tests and end-to-end tests (UI tests) have two different purposes:
Unit tests tell you when a unit of code (module, class, function, interface) has an issue.
End-to-end tests tell you how that failure affects the end-to-end output.
Let's use an analogy to understand why we need both.
Suppose you were manufacturing a car by assembling different components like the carburettor, gear box, tyres, crankshaft etc. All these parts are made by different vendors (think developers).
When the car fails to work as expected, will you need to test individual components to figure out where the problem originates?
Will testing components before assembling the car save you time and effort?
Typically what you want to do is make sure each component works as expected (unit tests) before you add it to your car.
When the car does not work as expected, you test each component to find the root cause of the problem.
This typically works by creating an assembly line (CI pipeline). Your testing strategy looks like this:
test individual components
test whether they work when interfaced with other components
test the car once all components are assembled together.
This testing strategy is what we call a testing pyramid in programming.
Reading this might give you more insight : https://martinfowler.com/bliki/TestPyramid.html
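To make the contrast concrete, here is a minimal sketch (the add function and its bug are invented for illustration): when a unit test like this fails, it names the exact function at fault, whereas a UI test would only report that the displayed total is wrong somewhere in the whole stack.
import unittest

def add(a, b):
    # the unit under test; imagine a bug hides here
    return a + b

class TestAdd(unittest.TestCase):
    def test_add_positive(self):
        self.assertEqual(add(2, 3), 5)

    def test_add_negative(self):
        self.assertEqual(add(-2, 3), 1)

if __name__ == "__main__":
    unittest.main()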
Two reasons immediately come to mind as to why you would want unit tests despite having automation tests.
Unit tests make ruthless code refactoring a much less daunting challenge, and mitigate the risk of breaking existing behaviour.
Unit tests provide invaluable documentation of the code and of what each module does (automation tests don't give you this). When the code changes, the unit tests change with it, unlike stale documentation in some wiki or doc that never gets updated as the code continues to change and evolve over time.
In addition to Nishant's and James's answers: with UI/end-to-end tests it is much harder to test for certain error conditions.
First of all, you need to understand that unit test cases and user interface (UI) test automation are two different concepts. In unit test cases, you write test cases per unit and test them module by module; you're actually testing each module separately.
Test automation, on the other hand, covers end-to-end testing. It tests your end-to-end inputs and their respective outputs. Both have their own advantages, so you need to use both on your product to make sure it is bug-free. Let's better understand the need for unit tests with the help of an example:
You're building a chatting app. For the app, you're integrating different modules like login, registration, sending and receiving messages, message history etc. Now, suppose there are multiple developers working on this product, each of whom has worked on a different module. In this scenario, you need to join all the modules into the system flow to make the complete product. When you integrate all the modules, you find that the product is not able to store messages. So now you need to test each module independently, because you can't tell which specific module didn't work.
To avoid such cases, it's better to test each module before merging it with the others. This is called unit testing. If unit testing is done correctly, you will catch the bug immediately. Once all the unit test cases pass, you can finally start integrating modules.
Unit testing is generally executed as part of an assembly line (CI pipeline). With a good testing strategy and well-written test cases, problems surface as early as possible. The flow is a bit like this:
Test individual modules
Start integrating and testing each functionality and see if it's working or not
Run UI automation test cases on the product once you have integrated all the modules
In the end, if all test cases pass, that means your system is ready to work flawlessly.

Isolation during unit testing and duplicate code for test data

I am working on an application in Java and writing JUnit tests. I have a design question about unit testing. I have one class that reads a file and creates an object called Song by reading different lines and parsing them based on some algorithm. I have written some unit tests for that. The next step after parsing is to convert the song to a different format based on some properties of the Song object. I have another class that works as a translator, with a method translate that takes a Song object as input. Now, in the unit test for the translator, I need a Song object with all valid properties. I am confused here: should I create a new Song object by duplicating the same functionality as in the parser, or should I call the parser service to do that for me? I feel the test will not be isolated if I take the second option, but the first option means duplicate code. Can somebody guide me on this?
There's nothing wrong with using a Builder to create the input data for a SUT invocation when this data is complex; however, I see two risks here.
If the builder fails, your test will fail too, but it shouldn't. As you said, unit tests should be isolated from external code.
If you use code coverage as a metric to evaluate how good your unit tests are (I don't mean this is right), then by looking at the builder's coverage you'll be tempted to think it's tested, though it obviously isn't.
My opinion is that there is no single solution that fits all scenarios. If the input data is not very complex, try to build it "manually"; otherwise, use the builder.
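As a sketch of the builder option without parser duplication (in Python for brevity, though the same shape works in Java; the Song fields are invented for illustration): the builder produces a valid Song with sensible defaults, so each translator test only spells out the properties it cares about and stays isolated from the parser.
class Song:
    # hypothetical model class from the question
    def __init__(self, title, artist, duration_seconds):
        self.title = title
        self.artist = artist
        self.duration_seconds = duration_seconds

class SongBuilder:
    # valid defaults; tests override only what matters to them
    def __init__(self):
        self._title = "Default Title"
        self._artist = "Default Artist"
        self._duration = 180

    def with_title(self, title):
        self._title = title
        return self

    def with_duration(self, seconds):
        self._duration = seconds
        return self

    def build(self):
        return Song(self._title, self._artist, self._duration)

# in a translator test: no parser involved, intent stays visible
song = SongBuilder().with_title("Bohemian Rhapsody").build()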

unit test via output sanity checks

I have often seen tests where canned inputs are fed into a program and the generated outputs are checked against canned (expected) outputs, usually via diff. If the diff is accepted, the code is deemed to pass the test.
Questions:
1) Is this an acceptable unit test?
2) Usually the unit test inputs are read in from the file system and are large XML files (maybe they represent a very large system). Are unit tests supposed to touch the file system? Or should a unit test create a small input on the fly and feed that to the code to be tested?
3) How can one refactor existing code to be unit testable?
Output differences
If your requirement is to produce output with a certain degree of accuracy, then such tests are absolutely fine. It's you who makes the final decision: "Is this output good enough, or not?"
Talking to file system
You don't want your tests to talk to the file system in the sense of relying on files existing somewhere in order for your tests to work (for example, reading values from configuration files). It's a bit different with test input resources: you can usually embed them in your tests (or at least in the test project), treat them as part of the codebase, and on top of that they usually should be loaded before the test executes. For example, when testing rather large XMLs it's reasonable to have them stored as separate files, rather than as strings in code files (which can sometimes be done instead).
The point is, you want to keep your tests isolated and repeatable. If you can achieve that with a file being loaded at runtime, it's probably fine. However, it's still better to have such files as part of the codebase/resources than as standard system files lying somewhere.
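A common way to keep such resource files repeatable (a sketch; the resources/ folder and file name are assumptions) is to resolve them relative to the test module itself rather than the current working directory, so the test passes no matter where the runner is invoked from:
import unittest
from pathlib import Path

# fixtures live next to the tests, under version control with the code
FIXTURES = Path(__file__).parent / "resources"

class TestLargeImport(unittest.TestCase):
    def test_parses_large_sample(self):
        xml_text = (FIXTURES / "large_sample.xml").read_text()
        # ... feed xml_text to the unit under test here ...
        self.assertTrue(xml_text.lstrip().startswith("<?xml"))

if __name__ == "__main__":
    unittest.main()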
Refactoring
This question is fairly broad, but to point you in the right direction: you want to introduce a more solid design, decouple objects and separate responsibilities. Better design will make testing easier and, most importantly, possible. Like I said, it's a broad and complex topic, with entire books dedicated to it.
1) is this an acceptable unit test?
This is not a unit test by definition. A unit test focuses on the smallest possible amount of code. Your test can still be a useful test: a regression test, a self-documenting test, a TDD test, etc. It is just not a unit test, although it may be equally useful.
2) Are unit tests supposed to touch the file system?
Typically not, unless you need to unit test something explicitly related to the file system. One reason is that if you have several hundred unit tests, it is nice to have them run in a couple of seconds rather than minutes.
3) How can one refactor existing code to be unit testable?
A better question is why do you want the code to be unit testable? If you are trying to learn TDD it is better to start with a new project. If you have bugs then try to write tests for the bugs. If the design is slowing you down then you can refactor towards testability over time.
Addressing only the 3rd question: it is extremely difficult. You really need to write tests at the same time you write the code, or before. Trying to slap tests onto an existing code base is a nightmare, and it is often more productive to throw away the code and start over.
This is an acceptable unit test.
The files being read should be part of the test project, so anyone who checks out the project from the repository will have the same files at the same relative location.
Having black box tests is a great start; you can refactor the existing code and use the current tests to verify that it is still working (depending on the quality of the tests). Here is a short blog about refactoring for testability: http://www.beletsky.net/2011/02/refactoring-to-testability.html
A diff test can be acceptable as a unit test, especially when you're using test data that is shared between unit tests.
If you don't know how many items there are in the SUT, you could use the following:
int itemsBeforeTest = SUT.Items.Count;                  // capture the current count
SUT.AddItem();                                          // act
Assert.AreEqual(itemsBeforeTest + 1, SUT.Items.Count);  // exactly one item was added
If a unit test requires so much data that it needs to be read from a big XML file, it's not a real unit test. A unit test should test a class in complete isolation and mock out all dependencies.
Using a pattern like the Builder pattern can also help in creating test data for your unit tests. The biggest problem with having your test data in a separate file is that it's hard to understand what the test does exactly. If you create your test data in the arrange part of your unit test, it's immediately clear what is important for the test.
For example, let's say you have the following arrange code to test whether the price of an invoice is correct:
Address billingAddress = new Address("Stationsweg 9F", "Groningen", "Nederland", "9726AE");
Address shippingAddress = new Address("Aweg 1", "Groningen", "Nederland", "9726AB");
Customer customer = new Customer(99, "Piet", "Klaassens", 30, billingAddress, shippingAddress);
Product product = new Product(88, "Tafel", 19.99);
Invoice invoice = new Invoice(customer);
This can be changed to the following when using a Builder:
Invoice invoice = Builder<Invoice>.CreateNew()
    .With(i => i.Product = Builder<Product>.CreateNew()
                               .With(p => p.Price = 19.99)
                               .Build())
    .Build();
When using a Builder, it's much easier to see what is important, and your code is also more maintainable.
Refactoring code to become more testable is a broad topic. It comes down to thinking about 'how would I test this code?' while you are writing the code.
Take the following example:
public class AlarmClock
{
    public AlarmClock()
    {
        SatelliteSyncService = new SatelliteSyncService();
        HardwareClient = new HardwareClient();
    }
}
This is hard to test. You need to make sure that both the SatelliteSyncService and the HardwareClient are functional when testing the AlarmClock.
This change to the constructor makes it much easier to test:
public AlarmClock(IHardwareClient hardwareClient, ISatelliteSyncService satelliteSyncService)
{
    SatelliteSyncService = satelliteSyncService;
    HardwareClient = hardwareClient;
}
Techniques like Dependency Injection help with refactoring your code to be more testable. Also watch out for static values like DateTime.Now or the use of a Singleton, because they are hard to test.
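For instance (a sketch in Python rather than C#, with invented names): injecting the clock the same way the AlarmClock above receives its dependencies makes time-dependent behaviour testable with a fixed instant instead of the real DateTime.Now / datetime.now():
import unittest
from datetime import datetime, timezone

class Greeter:
    # the clock is a constructor dependency; production code uses the
    # real time, tests pass a stub returning a fixed instant
    def __init__(self, now=lambda: datetime.now(timezone.utc)):
        self._now = now

    def greeting(self):
        return "Good morning" if self._now().hour < 12 else "Good afternoon"

class TestGreeter(unittest.TestCase):
    def test_morning_greeting(self):
        nine_am = lambda: datetime(2024, 1, 1, 9, 0, tzinfo=timezone.utc)
        self.assertEqual(Greeter(now=nine_am).greeting(), "Good morning")

if __name__ == "__main__":
    unittest.main()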
A really good introduction to writing testable code can be found here.
You should not require the code to be refactored to be able to perform unit tests. Unit tests, as the name implies, test a unit of code in the system. The best unit tests are small, quick to execute, and exercise only a very small subset of the code being tested (e.g. a class).
The reason for having small, compact unit tests that only exercise one part of the code is that the objective of unit tests is to find bugs in that unit of code. If the unit test takes a long time to execute and tests lots of things it makes the identification of a bug in the code that much harder.
As to accessing the file system, I see no problem. Some unit tests may require a database to be constructed before the test is carried out, or an output to be checked where it would be difficult or time-expensive to write the checks in code.
The files for unit testing should be treated like the rest of the code and put under version control. If you are paranoid, you could implement a check within the unit test, such as performing an MD5 on the file and comparing against a hard-coded value, so future reruns of the test can verify that the test data has not inadvertently changed.
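Such a guard could look like this sketch (the path and digest are placeholders for your own fixture and its known-good checksum):
import hashlib
import unittest
from pathlib import Path

# placeholder digest: compute it once from the known-good fixture
EXPECTED_MD5 = "0123456789abcdef0123456789abcdef"

class TestFixtureIntegrity(unittest.TestCase):
    def test_fixture_unchanged(self):
        data = (Path(__file__).parent / "resources" / "input.xml").read_bytes()
        self.assertEqual(hashlib.md5(data).hexdigest(), EXPECTED_MD5)

if __name__ == "__main__":
    unittest.main()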
Just my humble thoughts.

Unit testing, approval testing and data files

(Leaving aside hair-splitting about whether this is integration testing or unit testing.)
I would rather test first at the large scale. If my app writes a VRML file that is the same as the reference one, then the VRML exporter works, and I don't have to run unit tests on every single statement in the code. I would also like to use this to do some level of poor man's GUI testing by comparing screenshots.
Is there a unit test framework (ideally for C++) that integrates this sort of testing, or at least makes it easy to integrate with unit tests?
Edit: it seems a better term is approval testing. So, are there any other unit test frameworks that incorporate approval testing?
Have a look at Approval Tests, written by a couple of friends of mine. It's not C++, but it's the general idea of what you're after, also known as Golden Master testing. Good stuff, whether it's unit testing or not.
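The core of a golden master test is small enough to hand-roll in any unit test framework. A sketch (in Python for brevity; export_vrml and the file names are stand-ins): generate the output, diff it against the approved file, and on mismatch write a "received" file for a human to inspect and, if correct, promote to the new approved version.
import unittest
from pathlib import Path

def export_vrml(model):
    # stand-in for the real exporter under test
    return "#VRML V2.0 utf8\n"

class TestVrmlExport(unittest.TestCase):
    def test_matches_approved_output(self):
        approved = Path("exporter.approved.wrl")
        received = export_vrml(model=None)
        if not approved.exists() or approved.read_text() != received:
            # write the received output so a human can diff and approve it
            Path("exporter.received.wrl").write_text(received)
            self.fail("output differs from approved file; "
                      "inspect exporter.received.wrl")

if __name__ == "__main__":
    unittest.main()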
Kitware, for VTK, uses CDash to do most of its testing. Many of its tests are similar in nature to this: they write out an image of the rendered model and compare it to a reference image.
In addition, they have code in there specifically to handle very subtle differences from the reference image caused by different graphics card drivers/manufacturers/etc. The tests can be written to compare against the reference image with some tolerance.
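That tolerance idea reduces to a simple check (a sketch with invented numbers, using NumPy): compare the rendered image to the reference pixel by pixel and accept a small mean difference, so driver-level noise doesn't fail the build while real regressions still do.
import unittest
import numpy as np

TOLERANCE = 2.0  # mean absolute per-channel difference allowed

def images_match(rendered, reference, tol=TOLERANCE):
    # both are HxWx3 uint8 arrays of the same shape
    diff = np.abs(rendered.astype(int) - reference.astype(int))
    return diff.mean() <= tol

class TestRendering(unittest.TestCase):
    def test_within_tolerance(self):
        reference = np.zeros((4, 4, 3), dtype=np.uint8)
        rendered = reference.copy()
        rendered[0, 0, 0] = 3  # tiny deviation, e.g. driver rounding
        self.assertTrue(images_match(rendered, reference))

if __name__ == "__main__":
    unittest.main()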
Okay, I think you're making an incorrect assumption about the nature of unit test code; your statement that
If my app writes a VRML file that is the same as the reference one then the VRML exporter works, I don't then have to run unit tests on every single statement in the code.
is strictly correct if you're looking to do a validation test on your code, but note that this type of test is strictly different from what a unit test actually is. Unit tests are for testing individual units of code; they do not exist for verification purposes. Depending on your environment, you may not need unit tests at all, but please keep in mind that validation tests (testing the validity of the overall program output) and unit tests (testing that the individual code units work as expected) are completely different things.
(Note that I'm really not trying to be nitpicky about this; also, you can use plenty of unit test frameworks to achieve this result. Keep in mind, though, that what you're writing aren't really "unit tests", despite running them in a unit test framework.)