How to run specific xUnit test cases depending upon an app.config key value

We have to use an app.config key in our unit test cases, but the value of that key is supposed to be different across test cases. Depending on that value, how do we run specific cases? If one unit test case runs, then the other test cases should not run. How do we manage this programmatically in C#?
Thank you

One approach you can follow is to wrap access to your configuration values in an interface.
public interface IConfigurationWrapper
{
    string GetConfigA();
    string GetConfigB();
}

public class ConfigurationWrapper : IConfigurationWrapper
{
    public string GetConfigA()
    {
        return ConfigurationManager.AppSettings["countoffiles"];
    }

    public string GetConfigB()
    {
        // return whichever other app setting you need here
        return ConfigurationManager.AppSettings["someOtherKey"];
    }
}
And then in your tests mock it with NSubstitute or any other mocking library:
var configurations = Substitute.For<IConfigurationWrapper>();
configurations.GetConfigA().Returns("Config A for testing!");
Then you can use these mocks to run parameterized tests with xUnit, as sketched below.
This project of mine has an implementation of this wrapper pattern for testing.
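For example, a minimal sketch of a parameterized xUnit test built on the mocked wrapper might look like this (FileProcessor and its CountFilesToProcess method are hypothetical; only IConfigurationWrapper comes from the snippet above):

// requires: using Xunit; using NSubstitute;
[Theory]
[InlineData("3", 3)]
[InlineData("10", 10)]
public void ProcessesConfiguredNumberOfFiles(string configuredValue, int expected)
{
    // Each case gets its own configuration value instead of a shared app.config key
    var configurations = Substitute.For<IConfigurationWrapper>();
    configurations.GetConfigA().Returns(configuredValue);

    // FileProcessor is a hypothetical class that takes the wrapper as a dependency
    var sut = new FileProcessor(configurations);

    Assert.Equal(expected, sut.CountFilesToProcess());
}

This way every "case" runs against its own value, and no test case has to be skipped depending on what app.config happens to contain.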

What test to write?

I know this is a basic question, but due to my lack of knowledge I couldn't start writing tests for my application. So here I am trying to learn and understand what to test and how to test based on my application scenario.
class MyController {
    MyService service = new MyService();

    public void populateModel(Long keyVal, Model model) {
        List<String> myList = service.generateSomeList(keyVal);
        model.add("myList", myList);
    }
}

class MyService {
    ThirdPartyService extService = new ThirdPartyService();
    MyDao dao = new MyDao();

    public List<String> generateSomeList(Long key) {
        List<String> resultList = dao.fetchMyList(key);
        extService.processData(resultList); // using 3rd party service to process result. It doesn't return anything.
        return formattedList(resultList);
    }

    private List<String> formattedList(List<String> listToProcess) {
        // do some formatting
        return listToProcess;
    }
}

class MyDao {
    List<String> fetchMyList(Long key) {
        // Use DB connection to run sql and return result
        return null;
    }
}
I want to do both unit testing and integration testing. So some of my questions are:
Do I have to do unit testing for MyDao? I don't think so, since I can test the query results by testing at the service level.
What are the possible test cases at the service level? I can think of testing the result from the DB and testing the formatting function. Any other tests that I have missed?
While testing the generateSomeList() method, is it OK to create a dummy String list and test it against the result, like the code below? I am creating the list myself and testing it myself; is this a proper/correct way to write the test?
@Test
public void generateSomeListTest() {
    // Step 1: Create dummy string list, e.g. dummyList = ["test1", "test2", "test3"]
    // Step 2: when(mydao.fetchMyList(anyLong())).thenReturn(dummyList);
    // Step 3: result = service.generateSomeList(123L);
    // Step 4: assertEquals(dummyList.get(i), result.get(i));
}
I don't think I have to test the third party service, but I think I have to make sure it is being called. Is this correct? If yes, how can I do that with Mockito?
How do I make sure the third party service has really processed my data? Since its return type is void, how can I test that it really did its job, e.g. sent an email?
Do I have to write tests for the controller, or can I just write an integration test?
I would really appreciate it if you could answer these questions so I can understand the testing part of the application.
Thanks.
They're not really unit tests, but yes, you should test your DAOs. One of the main points in using DAOs is precisely that they're relatively easy to test (you store some test data in the database, then call a DAO method which executes a query, and check that the method returns what it should return), and that they make the service layer easy to test by mocking the DAOs. You should definitely not use the real DAOs when testing the services. Use mock DAOs. That will make the service tests much simpler, and much, much faster.
Testing the results from the DB is the job of the DAO test. The service test should use a mock DAO that returns hard-coded results, and check that the service does what it should do with these hard-coded results (formatting, in this case).
Yes, it's fine.
Usually, it's sufficient to stub the dependencies; verifying that they have been called is often redundant. In this case, it could be a good idea, since the third party service doesn't return anything. But that's a code smell: why doesn't it return something? See the verify() method in Mockito. It's the very first point in the Mockito documentation: http://docs.mockito.googlecode.com/hg/latest/org/mockito/Mockito.html#1
The third party service is supposed to have been tested separately and thus be reliable. So you're supposed to assume that it does what its documentation says it does. The test for A which uses B is not supposed to test B. Only A. The test of B tests B.
A unit test is usually simpler to write and faster to execute. It's also easier to test corner cases with unit tests. Integration tests should be more coarse-grained.
Just a note: what your code severely lacks, as it stands, is dependency injection. That's what will make your code testable. It's very hard to test as is, because the controller creates its service, which creates its DAO. Instead, the controller should be injected with the service, and the service should be injected with the DAO. That's what allows injecting a mock DAO into the service to test the service in isolation, and injecting a mock service into the controller to test the controller in isolation.

Visual Studio 2013: Creating ordered tests

Can someone suggest a way to run tests in a specific order in Visual Studio 2013 Express?
Is there a way to create a playlist for the tests, which also defines the order in which to run them?
By the way: these are functional tests, using Selenium, written as unit tests in C#/Visual Studio, not actual unit tests. Sometimes a regression test suite is so big it takes a while to run through all the tests. In these cases, I've often seen the need to run the tests in a prioritized order. Or there can be cases where it's difficult to run some tests without some other tests having been run before. In this regard, it's a bit more complicated than straight unit tests (which is the reason why this is normally done by test professionals, while unit tests are done by developers).
I've organised the tests in classes with related test methods. Ex.: All login tests are in a class called LoginTests, etc.
Class LoginTests:
- AdminCanLogin (...)
- UserCanLogin (...)
- IncorrectLoginFails (...)
- ...
Class CreatePostTests:
- CanCreateEmptyPost (...)
- CanCreateBasicPost (...)
- ...
These classes are unit test classes, in their own project. They in turn call classes and methods in a class library that uses Selenium.
MS suggests creating an "Ordered Unit Test" project. However, this is not available in the Express edition.
To address your playlist request directly, see the MS article http://msdn.microsoft.com/en-us/library/hh270865.aspx. ReSharper also has a nice test playlist tool.
Here is an article on how to set up Ordered Tests, but you cannot use this feature with Express as it requires Visual Studio Ultimate, Visual Studio Premium, or Visual Studio Test Professional: http://msdn.microsoft.com/en-us/library/ms182631.aspx
If you need them ordered then they are more than likely integration tests. I am assuming you would like them ordered so you can either prepare data for a test or tear data back down after it.
There are several ways to accommodate this requirement if that is the case. Using MSTest there are four attributes for this; you can see more details of when they are executed here: http://blogs.msdn.com/b/nnaderi/archive/2007/02/17/explaining-execution-order.aspx.
My other suggestion would be to have a helper class to perform the tasks (not tests) you are looking to have done in order. To be clear, this class would not be a test class, just a normal class with common functionality that would be called from within your tests.
If you need a test to create a product so another test can use that product and test that it can be added to a shopping cart, then I would create a "SetupProduct" method that would do this for you, as I am sure you would be testing various things that require a product. This prevents you from having test dependencies; see the sketch below.
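A minimal sketch of such a helper (the Product and IProductRepository types here are hypothetical stand-ins for whatever your application uses):

// A plain helper class, not a test class
public static class TestSetupHelper
{
    // Creates and persists a product for tests that need one,
    // so no test depends on another test having run first.
    public static Product SetupProduct(IProductRepository repository)
    {
        var product = new Product { Name = "Test product", Price = 9.99m };
        repository.Add(product);
        return product;
    }
}

A shopping-cart test would then call TestSetupHelper.SetupProduct(...) in its arrange step instead of relying on a separate "create product" test having run before it.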
With that said, integration tests are good for verifying end-to-end processes, but where possible and applicable it might be easier to mock some or all dependencies, such as your repositories. I use the Moq framework and find it really easy to work with.
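As a rough sketch (assuming the same hypothetical IProductRepository/Product types as above, and using Moq), stubbing a repository looks like this:

// requires: using Moq;
// Stub the repository so the test never touches the database
var repository = new Mock<IProductRepository>();
repository
    .Setup(r => r.GetById(42))
    .Returns(new Product { Id = 42, Name = "Test product" });

// Pass repository.Object to the class under test; repository.Verify(...)
// can later assert that the expected calls were made.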
This code is from the blog post linked above; I am placing it here in case the link ever dies.
Here is an example of a test class using the setup / tear down attributes to help with your tests.
[TestClass]
public class VSTSClass1
{
private TestContext testContextInstance;
public TestContext TestContext
{
get
{
return testContextInstance;
}
set
{
testContextInstance = value;
}
}
[ClassInitialize]
public static void ClassSetup(TestContext a)
{
Console.WriteLine("Class Setup");
}
[TestInitialize]
public void TestInit()
{
Console.WriteLine("Test Init");
}
[TestMethod]
public void Test1()
{
Console.WriteLine("Test1");
}
[TestMethod]
public void Test2()
{
Console.WriteLine("Test2");
}
[TestMethod]
public void Test3()
{
Console.WriteLine("Test3");
}
[TestCleanup]
public void TestCleanUp()
{
Console.WriteLine("TestCleanUp");
}
[ClassCleanup]
public static void ClassCleanUp()
{
Console.WriteLine("ClassCleanUp");
}
}
Here is the order that the methods were fired.
Class Setup
Test Init
Test1
TestCleanUp
Test Init
Test2
TestCleanUp
Test Init
Test3
TestCleanUp
ClassCleanUp
If you give more information on what you are trying to accomplish, I would be happy to help you decide when to use which attribute or when to use the helper class. Note that the helper class is NOT a test class, just a standard class with methods you can use to do common tasks that may be needed by multiple tests.

How can I unit test a method with database access?

I have already had some difficulties with unit tests, and I am trying to learn while using a small project I am currently working on. I ran into these two problems that I hope you can help me with:
1- My project is an MVC project. At which level should my unit tests start? Should they focus only on the business layer? Should they also test actions on my controllers?
2- I have a method that verifies a username's format and then accesses the DB to check if it is available for use. The return value is a boolean indicating whether the username is available or not.
Would one create a unit test for such a method?
I would be interested in testing the format verification, but how would I check it without querying the DB? Also, if the format is correct but the username is already in use, I will get a false value, even though the validation worked. I could decouple this method, but the DB verification should only happen if the format is correct, so they should somehow be tied together.
How would someone with unit testing knowledge solve this issue? Or how would someone refactor this method to be able to test it?
I could create a stub for the DB access, but how would I attach it to my project when running the tests and detach it when running locally?
Thanks!
In your specific case, one easy thing you could do is decompose your verification method into 3 different methods: one to check formatting, one to check DB availability, and one to tie them both together. This would allow you to test each of the sub-functions in isolation.
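A minimal sketch of that decomposition (the class name and the format rule are purely illustrative assumptions):

public class UsernameValidation
{
    // Pure logic: easy to unit test without any database
    public bool IsFormatValid(string username)
    {
        return !string.IsNullOrEmpty(username) && username.Length >= 3;
    }

    // Database access isolated in its own method (shown here as a stub)
    public bool IsUsernameAvailable(string username)
    {
        // query the DB here
        return true;
    }

    // Ties the two together; the DB is only hit when the format is valid
    public bool CheckUsername(string username)
    {
        return IsFormatValid(username) && IsUsernameAvailable(username);
    }
}

The format check can then be unit tested directly, while IsUsernameAvailable is the only piece that needs the database (or a stub).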
In more complex scenarios, other techniques may be useful. In essence, this is where dependency injection and inversion of control come in handy (unfortunately, those phrases mean different things to different people, but getting the basic ideas is usually a good start).
Your goal should be to decouple the concept of "Check if this username is available" from the implementation of checking the DB for it.
So, instead of this:
public class Validation
{
public bool CheckUsername(string username)
{
bool isFormatValid = IsFormatValid(username);
return isFormatValid && DB.CheckUsernameAvailability(username);
}
}
You could do something like this:
public class Validation
{
public bool CheckUsername(string username,
IUsernameAvailabilityChecker checker)
{
bool isFormatValid = IsFormatValid(username);
return isFormatValid && checker.CheckUsernameAvailability(username);
}
}
And then, from your unit test code, you can create a custom IUsernameAvailabilityChecker which does whatever you want for testing purposes. On the other hand, the actual production code can use a different implementation of IUsernameAvailabilityChecker to actually query the database.
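For instance, a minimal sketch of a test-side checker and a test (the AlwaysAvailableChecker name is an assumption, xUnit-style asserts are used purely for illustration, and "valid_name" is assumed to pass the format check):

// Test double that never touches the database
public class AlwaysAvailableChecker : IUsernameAvailabilityChecker
{
    public bool CheckUsernameAvailability(string username)
    {
        return true; // pretend every username is free
    }
}

[Fact]
public void CheckUsername_ValidFormatAndAvailableName_ReturnsTrue()
{
    var validation = new Validation();

    bool result = validation.CheckUsername("valid_name", new AlwaysAvailableChecker());

    Assert.True(result);
}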
Keep in mind that there are many, many techniques to solve this kind of testing problem, and the examples I gave are simple and contrived.
Testing against outside services can be done using mocking. If you've done a good job using interfaces, it's very easy to mock various parts of your application. These mocks can be injected into your unit and used as if they were the real implementations.
You should start unit testing as soon as possible. If your application is not complete, or code needed for testing is absent, you can still test against some interface you can mock.
On a side note: unit testing is about testing behavior and is not an effective way to find bugs. You will find bugs with testing, but it should not be your goal.
For instance:
interface UserService {
public void setUserRepository(UserRepository userRepository);
public boolean isUsernameAvailable(String username);
}
class MyUserService implements UserService {
private UserRepository userRepository;
public void setUserRepository(UserRepository userRepository) {
this.userRepository = userRepository;
}
public boolean isUsernameAvailable(String username) {
return userRepository.checkUsernameAvailability(username);
}
}
interface UserRepository {
public boolean checkUsernameAvailability(String username);
}
// The mock used for testing
class MockUserRepository implements UserRepository {
public boolean checkUsernameAvailability(String username) {
if ("john".equals(username)) {
return false;
}
return true;
}
}
class MyUnitTest {
public void testIfUserNotAvailable() {
UserService service = new MyUserService();
service.setUserRepository(new MockUserRepository());
assertFalse(service.isUsernameAvailable("john")); // yep, it's in use
}
public void testIfUserAvailable() {
UserService service = new MyUserService();
service.setUserRepository(new MockUserRepository());
assertTrue(service.isUsernameAvailable("mary")); // yep, it's available
}
}

Unit Testing - Defining the scope of the test

I am trying to figure out the right way to break up my unit tests.
Given the following two classes, one is a CategoryService and the other is a CategoryValidator using FluentValidation, how would you write these tests?
I have tried writing a test for the service and one for the validator, but how do I test that the validation works in the service? Or is that out of scope for the service test, and should it be covered in the validator test?
In the AddCategory method, the validator checks that the category name does not already exist. How do I test that in a unit test? Or is that an integration test?
Category Service
public class CategoryService : ValidatingServiceBase, ICategoryService
{
private readonly IUnitOfWork unitOfWork;
private readonly IRepository<Category> categoryRepository;
private readonly IRepository<SubCategory> subCategoryRepository;
private readonly IValidationService validationService;
public CategoryService(
IUnitOfWork unitOfWork,
IRepository<Category> categoryRepository,
IRepository<SubCategory> subCategoryRepository,
IValidationService validationService)
: base(validationService)
{
this.unitOfWork = unitOfWork;
this.categoryRepository = categoryRepository;
this.subCategoryRepository = subCategoryRepository;
this.validationService = validationService;
}
public bool AddCategory(Category category)
{
var validationResult = validationService.Validate(category);
if (!validationResult.IsValid)
return false;
categoryRepository.Add(category);
return true;
}
}
Category Validator
public class CategoryValidator : AbstractValidator<Category>
{
public CategoryValidator(ICategoryService service)
{
RuleFor(x => x.Name)
.NotEmpty()
.Must((category, name) =>
{
return service.GetCategories().SingleOrDefault(x => x.Name == name) == null;
});
}
}
Some people might find this heresy, but I prefer to have things under test even if the test is bordering on an integration test rather than a unit test. You just need to make sure your checks are comprehensive.
Don't worry about how the tests interact between classes; if you are starting out in unit testing (as I suspect you are), it is better to just write tests, and the final D of TDD will come naturally.
I think you should test the validator and the service separately. For the service test, you can mock the validator to return specific results, without polluting your test logic with too many details (pseudo-code):
test Should_Add_Category_Only_If_It_Is_Valid() {
//given
Category category = GivenACategory();
GivenValidatorAcceptsCategory(category);
//when
Bool result = service.AddCategory(category);
//then
AssertTrue(result);
VerifyServiceHasCategory(category);
}
test Should_Reject_Invalid_Category() {
//given
Category category = GivenACategory();
GivenValidatorRejectsCategory(category);
//when
Bool result = service.AddCategory(category);
//then
AssertFalse(result);
VerifyServiceDoesNotHaveCategory(category);
}
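For example, a concrete C# version of the first pseudo-code test could look roughly like this (using Moq and xUnit purely as an illustration, and assuming validationService.Validate returns a FluentValidation ValidationResult whose IsValid is true when there are no errors):

[Fact]
public void AddCategory_WithValidCategory_AddsToRepository()
{
    // Arrange: the validation service is mocked to report the category as valid
    var category = new Category { Name = "Books" };

    var validationService = new Mock<IValidationService>();
    validationService
        .Setup(v => v.Validate(category))
        .Returns(new ValidationResult()); // no errors => IsValid is true

    var categoryRepository = new Mock<IRepository<Category>>();
    var subCategoryRepository = new Mock<IRepository<SubCategory>>();
    var unitOfWork = new Mock<IUnitOfWork>();

    var service = new CategoryService(
        unitOfWork.Object,
        categoryRepository.Object,
        subCategoryRepository.Object,
        validationService.Object);

    // Act
    bool result = service.AddCategory(category);

    // Assert
    Assert.True(result);
    categoryRepository.Verify(r => r.Add(category), Times.Once());
}

The rejection case is the mirror image: set the mocked validator up to return an invalid result, assert the method returns false, and verify Add was never called.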
What you're addressing is beyond the definition of unit tests. Unit tests test units ;) So write unit tests for the service with a stubbed validator, and for the validator with a stubbed service. Then write some integration tests; these will validate whether your components interact properly.
However, it's always a matter of how you define a component (a unit) in your system. If you expose the functionality of the service and the validator together as a package, then write unit tests for them together, treating them as one unit, and don't bother thinking about how they work inside; just expect correct results from particular "API" (meaning: exposed interface) calls.
You can combine these two approaches, and test the service and the validator as one unit while being aware of its internals, using the white-box testing approach. However, this is discouraged for large components, and I think it's obvious why.
Finally, do all these names matter? They are just names; can't I just have tests? IMHO, it is important to draw the boundary around unit tests, which should test absolutely everything (meaning "every unit", see above). They give you flexibility and the ability to refactor and restructure the code any time you want, without the risk of changing the external behavior of the units. The second group are additional tests (integration, smoke, white-box, regression tests, etc.); because they test larger parts of the system, they are substantially less numerous. They are important because unit tests can't verify all interactions between the components, but they themselves cover only some of the possible scenarios.

Using IOC To Configure Unit and Integration Tests

I have a unit test project which uses Ninject to mock the database repositories. I would like to use these same tests as integration tests and use Ninject to bind my real database repositories back into their respective implementations so as to test/stress the application into the DB.
Is there a way to do this with Visual Studio 2012 or is there another test framework, other than MSTest, which allows for this type of configuration?
I would really hate to rewrite/copy these unit tests into an integration test project but I suspect I could copy the files in as links and have a single test file compiled into two projects (Unit and Integration).
Thanks
Todd
Your requirements sound really odd to me. The difference between a unit test and an integration test is much bigger than just connecting to a database or not. An integration test either has a much bigger scope, or tests if components communicate correctly. When you write a unit test, the scope of such a unit is normally small (one class/component with all dependencies mocked out), which means there is no need for using a DI container.
Let me put it differently. When the tests are exactly the same, why are you interested in running the same test with and without the database? Just leave the database in and test that. Besides these tests, you can add 'real' unit tests that have a much smaller scope.
With NUnit you can do this with TestCase.
Say you need to run the same tests as both unit and unit/integration tests using CustomerRepository and OrderRepository:
[TestFixture]
public class TestCustomerRepository
{
IKernel _unit;
IKernel _integration;
[SetUp]
public void Setup()
{
//setup both kernels
}
[TestCase("Unit")]
[TestCase("Integration")]
public void DoTest(String type)
{
var custRepo = GetRepo<ICustomerRepository>(type);
var orderRepo = GetRepo<IOrderRepository>(type);
//do the test here
}
protected T GetRepo<T>(String type)
{
if (type.Equals("Unit"))
{
return _unit.Get<T>();
}
return _integration.Get<T>();
}
}
This is the basic idea.
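The Setup method could then build the two kernels with different bindings, for example (a rough sketch; the Fake* classes and the concrete repository types are assumed stand-ins for your test doubles and real implementations):

[SetUp]
public void Setup()
{
    // Unit kernel: bind the interfaces to in-memory fakes/mocks
    _unit = new StandardKernel();
    _unit.Bind<ICustomerRepository>().To<FakeCustomerRepository>();
    _unit.Bind<IOrderRepository>().To<FakeOrderRepository>();

    // Integration kernel: bind the real, database-backed implementations
    _integration = new StandardKernel();
    _integration.Bind<ICustomerRepository>().To<CustomerRepository>();
    _integration.Bind<IOrderRepository>().To<OrderRepository>();
}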