I have an ASP.NET page that is the base to many more derived pages and it contains the following protected method
protected Change SetupApproval(string changeDescription)
{
    Change change = Change.GetInstance();
    change.Description = changeDescription;
    change.DateOfChange = DateTime.Now;
    change.MadeBy = Common.ActiveDirectory.GetUsersFullName(AccessCheck.CurrentUser());
    change.Page = PageName;
    return change;
}
I want to write the following Unit test
[TestMethod]
public void SetupApproval_SubmitChange_ValidateDescription()
{
    var page = new DerivedFromBaseClass();
    var messageToTest = "This is a test description";
    var change = (page as InternalAppsPage).SetupApproval(messageToTest);
    Assert.IsTrue(messageToTest == change.Description);
}
I'm sure there's a lot wrong with this code (so feel free to suggest corrections), but my main goal is to start implementing some tests for this entire project. I decided to start small - one method at a time. I first tried creating a new Test Project, but then I couldn't access the SetupApproval method because it is protected. My next attempt was to put the TestMethod inside the base page, but then there is no way to run the tests.
Lastly, I'm using Visual Studio 2008's default test framework.
Two options:
Assuming DerivedFromBaseClass is a class which exists solely for testing, give it a new public (or internal) method that simply calls SetupApproval and returns its return value (see the sketch after this list).
Make the method protected internal instead, and make sure you have InternalsVisibleTo set up for your test assembly so it has access to internal methods. Document that this is for the sake of testing.
The first option is cleaner, the second is simpler, particularly if you have a lot of protected methods and the base class is non-abstract.
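For example, the first option might look like this (TestableInternalAppsPage and CallSetupApproval are illustrative names, assuming InternalAppsPage is the base page from the question):

// Test-only subclass whose sole purpose is to expose the protected method.
public class TestableInternalAppsPage : InternalAppsPage
{
    public Change CallSetupApproval(string changeDescription)
    {
        return SetupApproval(changeDescription);
    }
}

[TestMethod]
public void SetupApproval_SubmitChange_ValidateDescription()
{
    var page = new TestableInternalAppsPage();
    var messageToTest = "This is a test description";

    var change = page.CallSetupApproval(messageToTest);

    Assert.AreEqual(messageToTest, change.Description);
}

For the second option, the attribute goes into the AssemblyInfo.cs of the assembly under test, e.g. [assembly: InternalsVisibleTo("Your.Test.Assembly")].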
I have an extension method which is used to read a particular claim from the current ClaimsPrincipal. But I also need to check this value against a list of items which I have in the appsettings.json.
I had this working by using a ConfigurationBuilder() to read the appsettings directly in much the same way as the startup does, although instead of using
.SetBasePath(Directory.GetCurrentDirectory())
as I do in the startup, I was using
.SetBasePath(Path.GetDirectoryName(Assembly.GetEntryAssembly().Location))
which, although it isn't pretty, works fine.
However, when the unit tests are run, none of the following gets me to the directory where the appsettings live:
Directory.GetCurrentDirectory()
Path.GetDirectoryName(Assembly.GetEntryAssembly().Location)
Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location)
and I cannot see a way of getting the IHostingEnvironment (or something similar) into the extension method to read the appsettings, or of ditching the ConfigurationBuilder() and getting at IOptions in the extension method, in such a way that both the unit tests and the running code work correctly.
I assume there must be a way of doing this? Although I expect that I should simply not be trying at all and lift the check against the list of items into another class entirely...
Putting business logic that may ever require dependencies into static methods is not recommended, because it makes it difficult to inject those dependencies. There are a few options:
Redesign the static method into a service so dependencies can be injected through the constructor. (Recommended)
public class Foo : IFoo
{
    private readonly IOptions<FooOptions> optionsAccessor;

    public Foo(IOptions<FooOptions> optionsAccessor)
    {
        this.optionsAccessor = optionsAccessor ??
            throw new ArgumentNullException(nameof(optionsAccessor));
    }

    public void DoSomething()
    {
        var x = this.optionsAccessor;
        // Same implementation as your static method
    }
}
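With this design a unit test can build the options in memory - Options.Create() from Microsoft.Extensions.Options returns an IOptions&lt;T&gt; wrapper - so neither the test nor the code under test needs a ConfigurationBuilder or a path to appsettings.json. A sketch, where FooOptions and its AllowedItems property are invented names:

// In-memory options: no file access, works identically under test runners.
var options = Options.Create(new FooOptions
{
    AllowedItems = new[] { "item1", "item2" } // stands in for the appsettings list
});
var foo = new Foo(options);

foo.DoSomething(); // exercises the logic against purely in-memory options

At runtime the same options are populated from configuration, e.g. services.Configure&lt;FooOptions&gt;(Configuration.GetSection("Foo")) in Startup.ConfigureServices.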
Inject the dependencies as parameters of the extension method.
public static void DoSomething(this object o, IOptions<FooOptions> optionsAccessor)
{
    // Implementation
}
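The call site then passes along the options it already received through constructor injection; a hypothetical controller might look like this:

public class SomeController : Controller
{
    private readonly IOptions<FooOptions> optionsAccessor;

    public SomeController(IOptions<FooOptions> optionsAccessor)
    {
        this.optionsAccessor = optionsAccessor;
    }

    public IActionResult Index()
    {
        // User is the current ClaimsPrincipal; the extension method
        // receives its dependency explicitly instead of building one.
        this.User.DoSomething(this.optionsAccessor);
        return this.View();
    }
}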
Redesign the static method to be a facade over an abstract factory like this example.
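The linked example isn't reproduced here, but the rough shape of such a facade is a static method delegating to a swappable factory, so production code keeps its static call site while tests substitute the implementation (all names below are invented):

public interface IFooService
{
    void DoSomething();
}

public static class FooFacade
{
    // Tests can overwrite this factory with one that returns a mock.
    public static Func<IFooService> Factory { get; set; } =
        () => throw new InvalidOperationException("Factory not configured.");

    public static void DoSomething()
    {
        Factory().DoSomething();
    }
}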
I am trying to write unit tests for some legacy code using Mockito, but I am not able to understand how to mock it. Can someone please help?
The real problem I am facing is that I cannot decide what exactly should be mocked. Below is the code. I have looked at numerous videos on YouTube and read many Mockito tutorials, but all of them seem to focus on how to use the Mockito framework.
The basic idea of what to mock is still unclear. Please point me to a better source if you know one. I do understand that the code shown below does not really showcase the best coding practice.
public class DataFacade {

    public boolean checkUserPresent(String userId) {
        return getSomeDao().checkUserPresent(userId);
    }

    private SomeDao getSomeDao() {
        DataSource dataSource = MyDataSourceFactory.getMySQLDataSource();
        SomeDao someDao = new SomeDao(dataSource);
        return someDao;
    }
}
Well, a unit test, as the name implies, tests a unit. You should mock anything that isn't part of that unit, especially external dependencies. A DAO is a classic example of something to mock in tests of a class that uses it: otherwise your test would perform actual data access, making it slower and more prone to failure for external reasons. If your DAO connects to a DataSource, for example, that DataSource's target (say, the database) may be down, failing your test even though the unit you wanted to test is perfectly fine. Mocking the DAO lets you test the unit independently.
Of course, your code is bad. Why? You are creating everything inside your method by calling a static factory method. I suggest using dependency injection instead, so the DAO is handed to your facade, for example:
private final SomeDao someDao;

public DataFacade(SomeDao someDao) {
    this.someDao = someDao;
}
This way, when instantiating your DataFacade, you can give it a DAO - which means that in your test you can give it a mock, for example:
@Test
public void testSomething() {
    SomeDao someDaoMock = Mockito.mock(SomeDao.class);
    DataFacade toTest = new DataFacade(someDaoMock);
    // ...now you can prepare your mock to do something
    // and then call the DataFacade method under test
}
Dependency injection frameworks like Spring, Google Guice, etc. can make this even easier to manage, but the first step is to stop your classes from creating their own dependencies and instead let those dependencies be handed in from the outside, which makes the whole thing a lot better.
You should "mock" the inner objects that you use in your methods.
For example if you write unit tests for DataFacade->checkUserPresent, you should mock the getSomeDao field.
You have a lot of ways to do it, but basically you can make getSomeDao to be public field, or get it from the constructor. In your test class, override this field with mocked object.
After you invoke DataFacade->checkUserPresent method, assert that checkUserPresent() is called.
For example, if you have this class:
public class StudentsStore
{
    private DbReader _db;

    public StudentsStore(DbReader db)
    {
        _db = db;
    }

    public boolean hasStudents()
    {
        int studentsCount = _db.getStudentsCount();
        if (studentsCount > 0)
            return true;
        else
            return false;
    }
}
And in your test method:
DbReader mockedDb = mock(DbReader.class);
when(mockedDb.getStudentsCount()).thenReturn(1);
StudentsStore store = new StudentsStore(mockedDb);
assertEquals(true, store.hasStudents());
// and, to assert the interaction itself as described above:
verify(mockedDb).getStudentsCount();
I'm mocking my repository correctly, but in cases like show() it returns null, so the view ends up crashing the test by calling a property on a null object.
I'm guessing I'm supposed to mock the returned Eloquent model, but I find two issues:
What's the point of implementing repository pattern if I'm gonna end up mocking eloquent model anyway
How do you mock them correctly? The code below gives me an error.
$this->mockRepository->shouldReceive('find')
    ->once()
    ->with(1)
    ->andReturn(Mockery::mock('MyNamespace\MyModel')
        // The view may call $book->title, so I'm guessing I have to mock
        // that call and its returned value, but this doesn't work as it says
        // 'Undefined property: Mockery\CompositeExpectation::$title'
        ->shouldReceive('getAttribute')
        ->andReturn('')
    );
Edit:
I'm trying to test the controller's actions as in:
$this->call('GET', 'books/1'); // will call Controller#show(1)
The thing is, at the end of the controller, it returns a view:
$book = Repo::find(1);
return view('books.show', compact('book'));
So the test case also runs the view, and if no $book is mocked, it is null and crashes.
So you're trying to unit test your controller to make sure that the right methods are called with the expected arguments. The controller-method fetches a model from the repo and passes it to the view. So we have to make sure that
the find()-method is called on the repo
the repo returns a model
the returned model is passed to the view
But first things first:
What's the point of implementing repository pattern if I'm gonna end up mocking eloquent model anyway?
It has many purposes besides (testable) consistent data-access rules across different sources, (testable) centralized cache strategies, etc. In this case, you're not testing the repository, and you actually don't even care what's returned; you're just interested that certain methods are called. So, in combination with the concept of dependency injection, you now have a powerful tool: you can simply swap the actual instance of the repo with the mock.
So let's say your controller looks like this:
class BookController extends Controller {

    protected $repo;

    public function __construct(MyNamespace\BookRepository $repo)
    {
        $this->repo = $repo;
    }

    public function show()
    {
        $book = $this->repo->find(1);
        return View::make('books.show', compact('book'));
    }
}
So now, within your test you just mock the repo and bind it to the container:
public function testShowBook()
{
    // no need to mock this, just make sure you pass something
    // to the view that is (or acts like) a book
    $book = new MyNamespace\Book;

    $bookRepoMock = Mockery::mock('MyNamespace\BookRepository');

    // make sure the repo is queried with 1
    // and you want it to return the book instantiated above
    $bookRepoMock->shouldReceive('find')
        ->once()
        ->with(1)
        ->andReturn($book);

    // bind your mock to the container, so whenever an instance of
    // MyNamespace\BookRepository is needed (like in your controller),
    // the mock will be loaded.
    $this->app->instance('MyNamespace\BookRepository', $bookRepoMock);

    // now trigger the controller method
    $response = $this->call('GET', 'books/1');
    $this->assertEquals(200, $response->getStatusCode());

    // check if the controller passed what was returned from the repo
    // to the view
    $this->assertViewHas('book', $book);
}
//EDIT in response to the comment:
"Now, in the first line of your testShowBook() you instantiate a new Book, which I am assuming is a subclass of Eloquent\Model. Wouldn't that invalidate the whole deal of inversion of control [...]? Since if you change ORM, you'd still have to change Book so that it wouldn't be a subclass of Model."
Well... yes and no. Yes, I've instantiated the model class in the test directly, but model in this context doesn't necessarily mean an instance of Eloquent\Model; it's more like the model in model-view-controller. Eloquent is only the ORM and happens to have a class named Model that you inherit from, but the model class in itself is just an entity of the business logic. It could extend Eloquent, it could extend Doctrine, or it could extend nothing at all.
In the end it's just a class that holds the data you pull, e.g., from a database; from an architecture point of view it is not aware of any ORM, it just contains data. A Book might have an author attribute, maybe even a getAuthor() method, but it doesn't really make sense for a book to have a save() or find() method. Yet it does if you're using Eloquent. And that's OK, because it's convenient, and in a small project there's nothing wrong with accessing it directly. But it's the repository's (or the controller's) job to deal with a specific ORM, not the model's. The actual model is sort of the outcome of an ORM interaction.
So yes, it might be a little confusing that the model seems so tightly bound to the ORM in Laravel, but, again, it's very convenient and perfectly fine for most projects. In fact, you won't even notice it unless you're using it directly in your application code (e.g. Book::where(...)->get();) and then decide to switch from Eloquent to something like Doctrine - this would obviously break your application. But if this is all encapsulated behind a repository, the rest of your application won't even notice when you switch between databases or even ORMs.
So, since you're working with repositories, only the Eloquent implementation of the repository should actually be aware that Book also extends Eloquent\Model and that a save() method can be called on it. The point is that it doesn't (or shouldn't) matter whether Book extends Model or not; it should still be instantiable anywhere in your application, because within your business logic it's just a Book, i.e. a plain old PHP object with some attributes and methods describing a book, not the strategies for finding or persisting the object. That's what repositories are for.
But yes, the absolute clean way is to have a BookInterface and then bind it to a specific implementation. So it could all look like this:
Interfaces:
interface BookInterface
{
    /**
     * Get the ISBN.
     *
     * @return string
     */
    public function getISBN();
}

interface BookRepositoryInterface
{
    /**
     * Find a book by the given Id.
     *
     * @return null|BookInterface
     */
    public function find($id);
}
Concrete implementations:
class Book extends Model implements BookInterface
{
    public function getISBN()
    {
        return $this->isbn;
    }
}

class EloquentBookRepository implements BookRepositoryInterface
{
    protected $book;

    public function __construct(Model $book)
    {
        $this->book = $book;
    }

    public function find($id)
    {
        return $this->book->find($id);
    }
}
And then bind the interfaces to the desired implementations:
App::bind('BookInterface', function()
{
    return new Book;
});

App::bind('BookRepositoryInterface', function()
{
    return new EloquentBookRepository(new Book);
});
It doesn't matter if Book extends Model or anything else; as long as it implements the BookInterface, it is a Book. That's why I bravely instantiated a new Book in the test: it doesn't matter if you change the ORM, it only matters if you have several implementations of the BookInterface, and that's not very likely (or sensible?), I guess. But just to play it safe, now that it's bound to the IoC container, you can instantiate it like this in the test:
$book = $this->app->make('BookInterface');
which will return an instance of whatever implementation of Book you're currently using.
So, for better testability
Code to interfaces rather than concrete classes
Use Laravel's IoC-Container to bind interfaces to concrete implementations (including mocks)
Use dependency injection
I hope that makes sense.
I am interested in the best way to write unit tests for a class whose public API involves some kind of a flow, for example:
public class PaginatedWriter {
    public void AppendLine(string line) { ... }
    public IEnumerable<string> GetPages() { ... }
    public int LinesPerPage { get; private set; }
}
This class paginates text lines into the given number of lines per page. In order to test this class, we may have something like:
public void AppendLine_EmptyLine_AddsEmptyLine() { ... }

public void AppendLine_NonemptyLine_AddsLine() { ... }

public void GetPages_ReturnsPages() {
    writer.AppendLine("abc");
    writer.AppendLine("def");
    var output = writer.GetPages();
    ...
}
Now, my question is: is it OK to make calls to AppendLine() in the last test method, even though we are testing the GetPages() method?
I know one solution in such situations is to make AppendLine() virtual and override it but the problem is that AppendLine() manipulates internal state which I don't think should be the business of the unit test.
The way I see it is that tests usually follow a pattern like 'setup - operate - check - teardown'.
I concentrate most of the common setup and teardown in the respective functions.
But for test specific setup and teardown it is part of the test method.
I see nothing wrong with preparing the state of the object under test using method calls on that object. In OOP I would not try to decouple state from operations, since the paradigm goes to great lengths to unify them, and if possible even to hide the state. In my view the unit under test is the class: state and methods together.
I make a visual distinction in the code by separating the setup block, the operate block, and the verify block with empty lines.
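Applied to the class from the question, the GetPages() test might read like this (a sketch; it assumes the default LinesPerPage is large enough that two lines fit on one page):

[TestMethod]
public void GetPages_TwoAppendedLines_ReturnsSinglePage()
{
    // setup
    var writer = new PaginatedWriter();

    // operate
    writer.AppendLine("abc");
    writer.AppendLine("def");
    var output = writer.GetPages().ToList(); // requires System.Linq

    // verify
    Assert.AreEqual(1, output.Count);
}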
Yes, I'd say that's absolutely fine.
Test in whatever way you find practical. I find there's rather too much dogma around testing exactly one thing in each test method. That's a lovely ideal, but sometimes it's just not nearly as practical as slightly less pure alternatives.
I was wondering whether the object under test should be a field and thus set up during a SetUp method (i.e. in JUnit, NUnit, MS Test, …).
Consider the following examples (this is C# with MSTest, but the idea should be similar for any other language and testing framework):
public class SomeStuff
{
    public string Value { get; private set; }

    public SomeStuff(string value)
    {
        this.Value = value;
    }
}
[TestClass]
public class SomeStuffTestWithSetUp
{
    private string value;
    private SomeStuff someStuff;

    [TestInitialize]
    public void MyTestInitialize()
    {
        this.value = Guid.NewGuid().ToString();
        this.someStuff = new SomeStuff(this.value);
    }

    [TestCleanup]
    public void MyTestCleanup()
    {
        this.someStuff = null;
        this.value = string.Empty;
    }

    [TestMethod]
    public void TestGetValue()
    {
        Assert.AreEqual(this.value, this.someStuff.Value);
    }
}
[TestClass]
public class SomeStuffTestWithoutSetup
{
    [TestMethod]
    public void TestGetValue()
    {
        string value = Guid.NewGuid().ToString();
        SomeStuff someStuff = new SomeStuff(value);
        Assert.AreEqual(value, someStuff.Value);
    }
}
Of course, with just one test method the first example is much too long, but with more test methods this could save quite some redundant code.
What are the pros and cons of each approach? Are there any “Best Practices”?
It's a slippery slope once you start initializing fields and generally setting up the context of your test within the test method itself. This leads to large test methods and really unmanageable fixtures that don't explain themselves very well.
Instead, you should look at BDD-style naming and test organization. Make one fixture per context, rather than one fixture per system under test. Then your [SetUp] truly does set up the context, and your tests can be simple one-liner asserts.
It's much easier to read when you see a test output that does this:
OrderFulfillmentServiceTests.cs

with_an_order_from_a_new_customer
    it should check their credit from the credit service
    it should give no discount

with valid credit check
    it should decrement inventory
    it should ship the goods

with a customer in texas or california
    it should add appropriate sales tax

with an order from a gold customer
    it should NOT check credit
    it should get expedited shipping added for free
Our tests are now really good documentation for our system. Each "with..." line is a test fixture, and the items below it are tests. Within a fixture, you set up the context (the state of the world the class name describes), and each test makes the simple assert that verifies what its method name says.
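A minimal, self-contained MSTest rendering of the one-fixture-per-context idea (the Customer class and its discount rule are invented purely for illustration):

[TestClass]
public class with_a_new_customer
{
    private Customer customer;

    [TestInitialize]
    public void EstablishContext()
    {
        // The fixture builds exactly one context: a brand-new customer.
        this.customer = new Customer { OrdersPlaced = 0 };
    }

    [TestMethod]
    public void it_should_give_no_discount()
    {
        Assert.AreEqual(0m, this.customer.Discount);
    }
}

public class Customer
{
    public int OrdersPlaced { get; set; }

    public decimal Discount
    {
        get { return OrdersPlaced > 0 ? 0.1m : 0m; }
    }
}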
The second approach is much more readable, and much easier to visually trace.
However, the first approach means less repetition.
What I've found is that I tend to use the SetUp to create objects (especially for things with a number of dependencies), and then set the values used in the test itself. From experience, this provides about the right amount of code-reuse versus readability/traceability.
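A sketch of that middle ground (GreetingService and its backing store are invented for illustration): the SetUp wires the object graph once, and each test supplies only the values it actually cares about.

// Hypothetical service with one injected dependency.
public class GreetingService
{
    private readonly IDictionary<string, string> store;

    public GreetingService(IDictionary<string, string> store)
    {
        this.store = store;
    }

    public string Greet(string key)
    {
        return "Hello " + this.store[key];
    }
}

[TestClass]
public class GreetingServiceTests
{
    private Dictionary<string, string> store;
    private GreetingService service;

    [TestInitialize]
    public void MyTestInitialize()
    {
        // Shared construction: the dependencies are wired once here...
        this.store = new Dictionary<string, string>();
        this.service = new GreetingService(this.store);
    }

    [TestMethod]
    public void Greet_KnownKey_UsesStoredValue()
    {
        // ...while the data this particular test cares about stays in the test.
        this.store["ada"] = "Ada";

        Assert.AreEqual("Hello Ada", this.service.Greet("ada"));
    }
}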
From talking with Kent Beck about the design of JUnit, I know that test classes were intended as a way to share setup between tests, so using the common initialization was the intent. However, that also means splitting tests that require different setups into separate test classes with revealing names.
Personally, I use Setup and Teardown methods for two distinct reasons, although I assume that others will have different reasons.
Use Setup and Teardown methods when there is common initiation logic that is used by all tests and a single instance of the object(s) created in the Setup are designed to be reused.
Use Setup and Teardown methods when the time it takes for creating and destroying any object(s) created takes enough time to slow down the unit testing process when repeated in each TestMethod.
To give you an idea of how often I run across these scenarios: in a project that I am working on now, only two of my test classes (out of about eighty) have an explicit need for Setup and Teardown methods, and both times it was to satisfy my second reason, due to the 10-second maximum I have enabled for each test execution.
I also prefer the readability of having the object(s) created and destroyed within the TestMethod, although it is not a breaking or selling point for me.
The approach I take is somewhere in the middle - I use TearDown and SetUp to create a test "sandbox" directory (and delete it when done), as well as to initialize some test member variables with default values that will be used to test the classes. I then set up some helper methods - one is generally called InstantiateClass() - which I call with the default parameters (if any), which I can override as necessary in each specific test.
[Test]
public void TestSomething()
{
    _myVar = "value";
    InstantiateClass();
    RunTheClass();

    Assert.IsTrue(result); // placeholder: assert on whatever the class should have done
}
In practice, I find setup methods make it hard to reason about a failing test: you have to scroll to somewhere near the top of the file (which can be very large) to figure out which collaborator has broken (not easy with mocking), and there is no clickable reference to navigate in your IDE. In short, you lose spatial locality.
Static helper methods reveal the collaborators more explicitly, and you avoid fields, which would unnecessarily widen the scope of variables.
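As a sketch, reusing the SomeStuff class from the earlier question, the static-helper style looks like this - the helper is clickable in the IDE and there are no fields holding state between tests:

[TestClass]
public class SomeStuffTests
{
    [TestMethod]
    public void TestGetValue()
    {
        var someStuff = CreateSomeStuff("expected value");

        Assert.AreEqual("expected value", someStuff.Value);
    }

    // A static helper instead of [TestInitialize]: the collaborators
    // are explicit at each call site, and nothing hides in fields.
    private static SomeStuff CreateSomeStuff(string value)
    {
        return new SomeStuff(value);
    }
}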