Developing several test suites with Google Test - C++

I would like to develop several test suites with Google Test framework.
All these test suites should share the same test fixture.
I would like to have one .cpp file per test suite. One of the .cpp files contains the fixture and test suite number 1.
Could you hint at the right architecture to do so?

I create one test suite per .cpp file, with one fixture per test suite. The code duplication is minimal.
A test fixture is meant to hold common code for several tests that operate on the same data, in the same context... One fixture per test suite seems relevant if each test suite is characterized by a given context (a minimal sketch of this layout follows the quote below).
If you find yourself writing two or more tests that operate on similar
data, you can use a test fixture. It allows you to reuse the same
configuration of objects for several different tests.
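As a minimal sketch of that layout (assuming the shared fixture lives in a common header and each test suite's fixture derives from it; all names below are illustrative):

// common_fixture.h - shared base fixture, included by every suite's .cpp file
#pragma once
#include <gtest/gtest.h>

class CommonFixture : public ::testing::Test {
protected:
    void SetUp() override { value_ = 42; }   // context shared by all suites
    int value_ = 0;                          // shared test data
};

// suite1.cpp - one test suite per file, each with a thin fixture of its own
#include "common_fixture.h"
class Suite1 : public CommonFixture {};      // inherits the common context as-is
TEST_F(Suite1, HandlesNominalCase) {
    EXPECT_EQ(value_, 42);
}

// suite2.cpp - same idea, with suite-specific setup layered on top
#include "common_fixture.h"
class Suite2 : public CommonFixture {
protected:
    void SetUp() override {
        CommonFixture::SetUp();              // reuse the common setup
        // add context specific to this suite here
    }
};
TEST_F(Suite2, HandlesEdgeCase) {
    EXPECT_GE(value_, 0);
}

Each test suite stays in its own .cpp file, the common context is written once, and each suite remains free to add its own SetUp/TearDown on top.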

How does one write an automated integration test?

I read a lot about unit testing and integration testing, and the unit testing part I understand fairly well - isolate the object under test, and mock out dependencies using interfaces and inject those. (Or use seams to inject testable behavior).
However, something that is still a mystery to me even after searching is integration testing. Every blog and link talks about testing various components together, dos and don'ts, CI servers, etc., but there's not much explanation of how this is actually done.
Is an integration test automated? Or is this a manual effort? If it is automated, do I write this as code in the native language my app is in? How do I check or verify if the integration test is working as expected?
For example, if I have 4 services (a Socket Client, a Socket Server, a Database, and a Web Application) and I want to do some integration testing on how these 4 services interact with each other, how exactly would I approach this? I know that some dummy data will be involved, but which part of my integration test checks whether the systems are working together correctly? This part is really unclear to me.
As you said about unit testing, you need to isolate the object under test, and mock out dependencies using interfaces and inject those...
An integration test is done without isolating the unit... You can test two parts or more (but you shouldn't have extensive integration test suites or big test scenarios either). An example of an integration test is testing your code against the database; for that you may need to initialize the database and clean it up so that the tests are repeatable (a sketch of such a test appears below).
You could also have interface tests (client execution and integration with the server, for example) and so on.
Don't forget that these tests come with a downside: they are slower and harder to maintain. But they serve a different purpose than unit tests, namely checking that the units work as expected with their real dependencies.
So to wrap up: if you write a test without totally isolating the unit, it is an integration test. Given its nature, it is better to test the logic of your code with unit tests and reserve a smaller number of integration tests for checking the interaction between the units.
You can also check this nice introduction https://softwareengineering.stackexchange.com/questions/48237/what-is-an-integration-test-exactly
An interesting definition from "The Pragmatic Programmer": integration tests show that the major parts of a system work well together.
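To make the database example above concrete, here is a hedged sketch in Google Test style, the C++ framework used in the first question on this page; Database, Order, OrderRepository and all their methods are hypothetical stand-ins for whatever client library and code you actually use:

#include <gtest/gtest.h>
#include "database.h"          // hypothetical client for the real test database

// Integration test: the repository is exercised against a real (test) database,
// so the fixture is responsible for making each test repeatable.
class OrderRepositoryIT : public ::testing::Test {
protected:
    void SetUp() override {
        db_.Connect("localhost:5432/test_db");   // hypothetical test instance
        db_.Execute("DELETE FROM orders");       // start from a known, empty state
    }
    void TearDown() override {
        db_.Execute("DELETE FROM orders");       // leave nothing behind for the next test
        db_.Disconnect();
    }
    Database db_;                                // real dependency, not a mock
};

TEST_F(OrderRepositoryIT, SavedOrderCanBeReadBack) {
    OrderRepository repo(db_);                   // hypothetical code under test
    repo.Save(Order{1, 99});
    EXPECT_EQ(repo.FindById(1).amount, 99);      // checks the pieces work together
}

The assertion at the end is the part that checks the systems work together: it fails if saving through the repository and reading back from the real database disagree.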

Running a particular TestFixture class using NUnit tool

I have implemented unit tests for my MVC Application using the NUnit Framework.
The unit testing project contains multiple [TestFixture] classes that need to be tested. I'm using a TestFixture for each module in my MVC project, but all of the modules are inside a single namespace.
I would like to be able to test a single Module through its [TestFixture] using the Unit Tool in FinalBuilder (automation) or Manually, instead of testing the whole unit test project.
My total test case count is around 2000, made up of 20 modules each with about 100 test cases. Any change made to one of the modules or its TestFixture means that only that module's tests need to be run. This will minimise the time spent waiting for unrelated tests to complete.
As far as manual testing goes, in the NUnit GUI you can simply select a test fixture and run it. I haven't used FinalBuilder, but assuming it's similar to the nunit-console application, you have a couple of options.
You can pass arguments to specify particular test cases, for example:
nunit-console some.dll /run=SomeTestFixture
Or, you can mark up your test fixtures with categories and then tell the test runner to only include tests in those categories, so in your code:
[TestFixture]
[Category("SomeTestCategory")]
public class SomeTestClass {
    // test methods for this fixture go here
}
Then in your nunit call:
nunit-console some.dll /include=SomeTestCategory
That said, assuming that your tests are actually unit tests, 2000 doesn't seem like that many and they shouldn't really be taking all that long to run...

Unit Test? Integration Test? Regression Test? Acceptance Test?

Can anyone clearly define these levels of testing? I find it difficult to differentiate between them when doing TDD or unit testing. Please elaborate on how and when to implement each of these.
Briefly:
Unit testing - You unit test each individual piece of code. Think each file or class.
Integration testing - When putting together several units that interact, you need to conduct integration testing to make sure that integrating these units has not introduced any errors.
Regression testing - After integrating (and maybe fixing) you should run your unit tests again. This is regression testing, to ensure that further changes have not broken any units that were already tested. The unit testing you already did has produced unit tests that can be run again and again for regression testing.
Acceptance testing - When a user/customer/business receives the functionality, they (or your test department) will conduct acceptance tests to ensure that the functionality meets their requirements.
You might also like to investigate white box and black box testing. There are also performance and load testing, and testing of the "-ilities" to consider.
Unit test: when it fails, it tells you what piece of your code needs to be fixed.
Integration test: when it fails, it tells you that the pieces of your application are not working together as expected.
Acceptance test: when it fails, it tells you that the application is not doing what the customer expects it to do.
Regression test: when it fails, it tells you that the application no longer behaves the way it used to.
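In code form, the first two of these might look as follows (a hedged sketch using Google Test, the framework from the first question on this page; every name below is made up):

#include <gtest/gtest.h>
#include <string>

// Hypothetical unit under test and the dependency it talks to.
struct TaxService {
    virtual double RateFor(const std::string& region) const = 0;
    virtual ~TaxService() = default;
};
class PriceCalculator {
public:
    explicit PriceCalculator(const TaxService& tax) : tax_(tax) {}
    double Total(double net, const std::string& region) const {
        return net * (1.0 + tax_.RateFor(region));
    }
private:
    const TaxService& tax_;
};

// Unit test: the dependency is faked, so a failure points at PriceCalculator itself.
struct FakeTaxService : TaxService {
    double RateFor(const std::string&) const override { return 0.10; }
};
TEST(PriceCalculatorUnit, AddsTaxToNetPrice) {
    FakeTaxService fake;
    EXPECT_DOUBLE_EQ(PriceCalculator(fake).Total(100.0, "anywhere"), 110.0);
}

// Integration test (sketched only): the same kind of assertion, but against the real
// TaxService implementation, so a failure means the pieces do not work together.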
Here's a simple explanation for each of the mentioned tests and when they are applicable:
Unit Test
A unit test is performed on a self-contained unit (usually a class or method) and should be performed whenever a unit has been implemented or an update to a unit has been completed.
This means it's run whenever you've written a class/method, fixed a bug, changed functionality...
Integration Test
An integration test aims to test how well several units interact with each other. This type of test should be performed whenever a new form of communication has been established between units or the nature of their interaction has changed.
This means it's run whenever a recently written unit is integrated into the rest of the system, or whenever a unit which interacts with other units has been updated (and has successfully completed its unit tests).
Regression Test
Regression tests are performed whenever anything has been changed in the system, in order to check that no new bugs have been introduced.
This means it's run after all patches, upgrades, bug fixes. Regression testing can be seen as a special case of combined unit test and integration test.
Acceptance Test
Acceptance tests are performed whenever it is relevant to check that a subsystem (possibly the entire system) fulfils its entire specification.
This means it's mainly run before finishing a new deliverable or announcing completion of a larger task. See this as your final check to see that you've really completed your goals before running to the client/boss and announcing victory.
This is at least the way I learned, though I'm sure there are other opposing views. Either way, I hope that helps.
I'll try:
Unit test: a developer would write one to test an individual component or class.
Integration test: a more extensive test that would involve several components or packages that need to collaborate.
Regression test: Making a single change to an application forces you to re-run ALL the tests and check out ALL the functionality.
Acceptance test: End users or QA do these prior to signing off to accept delivery of an application. It says "The app met my requirements."
Unit Test: is my single method working correctly? (NO dependencies, or dependencies mocked)
Integration Test: are my two separately developed modules working correctly when put together?
Regression Test: did I break anything by changing/writing new code? (Running unit/integration tests with every commit is technically (automated) regression testing.) More often used in the context of QA - manual or automated.
Acceptance Test: testing done by the client, in which they "accept" the delivered software.
@Andrejs makes a good point about the differences between the environments associated with each type of testing.
Unit tests are typically run on the developer's machine (and possibly during the CI build), with dependencies on other resources/systems mocked out.
Integration tests by definition have to have (some degree of) availability of the dependencies, the other resources and systems being called, so the environment is more representative. Data for testing may be mocked or a small obfuscated subset of real production data.
UAT/acceptance testing has to represent the real-world experience to the QA and business teams accepting the software, so it needs full integration, realistic data volumes, and fully masked/obfuscated data sets to deliver realistic performance and a realistic end-user experience.
The other "-ilities" are also likely to need the environment to be as close as possible to reality to simulate the production experience, e.g. performance testing, security, ...

When to Unit Test and When to Integration Test

I am new to testing in general and am working on a Grails application.
I want to write a test that says "when this action is called, the correct view is returned". I don't know how to go about deciding if I should make something like this a unit test or an integration test. Either test would show me what I want - how do I decide?
One problem with integration tests is their speed. For me, integration tests take 15+ seconds to start up. In that time, certain things slip out of focus.
I prefer to go with unit tests that start in no more than 2 seconds and can be run several times in those 15 seconds. Especially with mockDomain(). Especially with Grails 2.0 implementing criteria and named queries in unit tests.
One more argument for unit tests is they force you to decouple your code. Integration tests always tempt you to just rely on some other component existing and initialized.
From Grails Docs section 9.1
Unit tests are tests at the "unit" level. In other words you are
testing individual methods or blocks of code without considering the
surrounding infrastructure. In Grails you need to be particularly
aware of the difference between unit and integration tests because in
unit tests Grails does not inject any of the dynamic methods present
during integration tests and at runtime.
From Grails Docs section 9.2
Integration tests differ from unit tests in that you have full access
to the Grails environment within the test. Grails will use an
in-memory HSQLDB database for integration tests and clear out all the
data from the database in between each test.
What this means is that a unit test is completely isolated from the Grails environment whereas an integration test is not. According to Scott Davis, author of this article, it is acceptable to write only integration tests...
Unit vs. integration tests
As I mentioned earlier, Grails supports two basic types of tests: unit
and integration. There's no syntactical difference between the two —
both are written as a GroovyTestCase using the same assertions. The
difference is the semantics. A unit test is meant to test the class in
isolation, whereas the integration test allows you to test the class
in a full, running environment.
Quite frankly, if you want to write all of your Grails tests as
integration tests, that's just fine with me. All of the Grails
create-* commands generate corresponding integration tests, so most
folks simply use what is already there. As you'll see in just a
moment, most of the things you want to test require the full
environment to be up and running anyway, so integration tests are a
pretty good default. If you have noncore Grails classes that you'd
like to test, unit tests are perfectly fine.
First, go through this chapter of the Grails guide: http://grails.org/doc/latest/guide/9.%20Testing.html
It talks about testing controllers and the ability to get the controller response like so:
controller.response.contentAsString
Now, deciding which test to use is more of an art than a science. I prefer unit tests because they are faster to run :)
It's a really interesting and challenging question to answer, but the truth is it really depends on exactly what you are testing.
Take the following test: "saving a book to the database". The hints are in the description. We are saying we need a book and we need a database, so in this case a unit test won't do, because we need the integrated database.
My advice is to write the full test description down and break it down like I did above. It will give you the hints to help you decide.
This is made easier with Spock, where you can use strings for test names.

How do I add integration testing to my code written using the ASP.NET Provider Model?

I'm fairly new to unit testing. I have a site built in the 3-tier architecture, UI -> BLL -> DAL, and I used the ASP.NET Provider Model, so by making changes to web.config I can switch the DAL DLL to target different data sources; to be complete, the DAL is written with Entity Framework.
Now, my question is how do I unit test my BLL? I use NUnit.
If I run/debug my site, ASP.NET/IIS loads everything and gets the correct configuration from web.config, so things work; that is because the entry point is IIS. Now if I use the NUnit GUI to test, and say I have my test project "MySite.Test.dll" which has test cases for my BLL, how does the testing framework get the correct configuration to successfully run all the tests? It needs the info in web.config to load the correct provider!
Now, in my DAL there is an App.config created by Entity Framework, and in it there is just the connection string. Should I put all the provider-related configuration in this App.config? Or am I missing some big picture on how to do this correctly?
This should be a common thing that I imagine people need to do constantly. Could someone give some detail on how to test my library?
Thank you,
Ray.
Edit: After reading the first 2 answers, I think I should correct my description to say integration testing. Basically, instead of IIS as the entry point, I want to use GUI tools like NUnit to run and test my code, so NUnit -> BLL -> DAL. How do people actually set this up?
Thanks,
Ray.
Looks like what you are trying to do is integration testing...
Unit testing, by definition, means testing plain old .NET classes in isolation. No database, no configuration... so, as I see it, to do proper unit testing you need to refactor your BLL into a service layer and domain logic classes, which you will test separately. That is: the service layer uses the domain logic classes, and your unit tests use them too. That way the domain classes do not go to the database, and you do not need connection strings and all the rest.
However, if you want to do proper integration testing with a database, you might want to do that too. If this is what you need, google it; it's not difficult to get some configuration strings into nunit.config or something similar... I don't know the details.
However, I feel what you want to do is unit testing and not integration testing...
Ask yourself, WHAT EXACTLY DO I WANT TO TEST?
Unit testing does not test "everything". Refactor, invert dependencies, and try to test your business logic in isolation.
Instead of a "web.config" file in your unit test project, you'll need a "MySite.Test.dll.config" file instead where you can enter the correct configuration for the testing. Note, using this method you could use a different provider to connect to an in-memory database instead if you wanted.
You have a couple of different options depending on how isolated you want to be from the DAL. If you want to involve the DAL in your tests, then you can copy the connection string section of the app.config file to an app.config file in your unit test project. As @badbadboy says, though, this really is integration testing, not unit testing.
If you want to do proper unit testing you probably want to use dependency injection and interfaces to allow you to mock out the DAL from your BLL. I use LINQ-to-SQL so what I do is create an interface and wrapper class around the DataContext. This allows me to create a mock database to use for unit testing to test my entity classes in isolation.
Testing your entity classes in isolation will make your tests less brittle and allow you to set up the data for each test independently. This makes it much easier to maintain.
You also might want to look into a mocking framework that will make it almost trivial to generate the mock objects. I've had pretty good success with Rhino Mocks, but there are others.
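The interface-and-wrapper idea above is language-agnostic. As a hedged sketch of the same shape in C++ (staying with the Google Test framework used earlier on this page; all names below are made up), the business logic depends only on an abstract data-access interface, so a unit test can hand it a fake while the real data-context-backed implementation is exercised only in integration tests:

#include <gtest/gtest.h>
#include <cstddef>
#include <string>
#include <vector>

// Abstraction over the DAL; the production implementation would wrap the real data context.
struct ICustomerStore {
    virtual std::vector<std::string> ActiveCustomerNames() const = 0;
    virtual ~ICustomerStore() = default;
};

// Business logic depends only on the interface, so it can be unit tested in isolation.
class CustomerService {
public:
    explicit CustomerService(const ICustomerStore& store) : store_(store) {}
    std::size_t ActiveCustomerCount() const { return store_.ActiveCustomerNames().size(); }
private:
    const ICustomerStore& store_;
};

// Unit test: a hand-rolled fake stands in for the database-backed store.
struct FakeCustomerStore : ICustomerStore {
    std::vector<std::string> ActiveCustomerNames() const override { return {"Ada", "Linus"}; }
};

TEST(CustomerServiceUnit, CountsActiveCustomers) {
    FakeCustomerStore fake;
    EXPECT_EQ(CustomerService(fake).ActiveCustomerCount(), 2u);
}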