Laravel tests: passing a model to the view - unit-testing

I'm mocking my repository correctly, but in cases like show() the mock returns null, so the view ends up crashing the test when it tries to access a property on a null object.
I'm guessing I'm supposed to mock the Eloquent model that gets returned, but I see two issues:
What's the point of implementing the repository pattern if I'm gonna end up mocking the Eloquent model anyway?
How do you mock them correctly? The code below gives me an error.
$this->mockRepository->shouldReceive('find')
    ->once()
    ->with(1)
    ->andReturn(Mockery::mock('MyNamespace\MyModel')
        // The view may call $book->title, so I'm guessing I have to mock
        // that call and its returned value, but this doesn't work as it says
        // 'Undefined property: Mockery\CompositeExpectation::$title'
        ->shouldReceive('getAttribute')
        ->andReturn('')
    );
Edit:
I'm trying to test the controller's actions as in:
$this->call('GET', 'books/1'); // will call Controller#show(1)
The thing is, at the end of the controller, it returns a view:
$book = Repo::find(1);
return view('books.show', compact('book'));
So the test case also runs the view, and since no $book is mocked, it is null and the view crashes.

So you're trying to unit test your controller to make sure that the right methods are called with the expected arguments. The controller method fetches a model from the repo and passes it to the view. So we have to make sure that:
the find() method is called on the repo
the repo returns a model
the returned model is passed to the view
But first things first:
What's the point of implementing the repository pattern if I'm gonna end up mocking the Eloquent model anyway?
It has many purposes: (testable) consistent data access rules across different sources, (testable) centralized cache strategies, and so on. In this case, you're not testing the repository, and you actually don't even care what's returned; you're just interested that certain methods are called. So, in combination with the concept of dependency injection, you now have a powerful tool: you can simply swap the actual instance of the repo with the mock.
So let's say your controller looks like this:
class BookController extends Controller {

    protected $repo;

    public function __construct(MyNamespace\BookRepository $repo)
    {
        $this->repo = $repo;
    }

    public function show()
    {
        $book = $this->repo->find(1);
        return View::make('books.show', compact('book'));
    }
}
So now, within your test you just mock the repo and bind it to the container:
public function testShowBook()
{
    // no need to mock this, just make sure you pass something
    // to the view that is (or acts like) a book
    $book = new MyNamespace\Book;

    $bookRepoMock = Mockery::mock('MyNamespace\BookRepository');

    // make sure the repo is queried with 1
    // and you want it to return the book instantiated above
    $bookRepoMock->shouldReceive('find')
        ->once()
        ->with(1)
        ->andReturn($book);

    // bind your mock to the container, so whenever an instance of
    // MyNamespace\BookRepository is needed (like in your controller),
    // the mock will be loaded.
    $this->app->instance('MyNamespace\BookRepository', $bookRepoMock);

    // now trigger the controller method
    $response = $this->call('GET', 'books/1');
    $this->assertEquals(200, $response->getStatusCode());

    // check if the controller passed what was returned from the repo
    // to the view
    $this->assertViewHas('book', $book);
}
Edit (in response to the comment):
Now, in the first line of your testShowBook() you instantiate a new Book, which I am assuming is a subclass of Eloquent\Model. Wouldn't that invalidate the whole deal of inversion of control [...]? Since if you change ORM, you'd still have to change Book so that it wouldn't be a subclass of Model.
Well... yes and no. Yes, I've instantiated the model class in the test directly, but model in this context doesn't necessarily mean an instance of Eloquent\Model; it's more like the model in model-view-controller. Eloquent is only the ORM and has a class named Model that you inherit from, but the model class itself is just an entity of the business logic. It could extend Eloquent, it could extend Doctrine, or it could extend nothing at all.
In the end it's just a class that holds the data that you pull, e.g., from a database; from an architectural point of view it is not aware of any ORM, it just contains data. A Book might have an author attribute, maybe even a getAuthor() method, but it doesn't really make sense for a book to have a save() or find() method. But it does if you're using Eloquent. And that's okay, because it's convenient, and in small projects there's nothing wrong with accessing it directly. But it's the repository's (or the controller's) job to deal with a specific ORM, not the model's. The actual model is sort of the outcome of an ORM interaction.
So yes, it might be a little confusing that the model seems so tightly bound to the ORM in Laravel, but, again, it's very convenient and perfectly fine for most projects. In fact, you won't even notice it unless you're using it directly in your application code (e.g. Book::where(...)->get();) and then decide to switch from Eloquent to something like Doctrine - this would obviously break your application. But if this is all encapsulated behind a repository, the rest of your application won't even notice when you switch between databases or even ORMs.
You're working with repositories, so only the Eloquent implementation of the repository should actually be aware that Book also extends Eloquent\Model and that it can call a save() method on it. The point is that it doesn't (= shouldn't) matter whether Book extends Model or not; it should still be instantiable anywhere in your application, because within your business logic it's just a Book, i.e. a Plain Old PHP Object with some attributes and methods describing a book, and not the strategies for how to find or persist the object. That's what repositories are for.
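To make that concrete, here is a minimal sketch (not code from the question) of what such a "plain" Book could look like; the test above would work with it just as well, because the controller and the view only read its attributes:

class Book
{
    // a Plain Old PHP Object: just data describing a book,
    // no save(), find() or any other persistence logic
    public $title;
    public $author;

    public function __construct($title = '', $author = '')
    {
        $this->title = $title;
        $this->author = $author;
    }

    public function getAuthor()
    {
        return $this->author;
    }
}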
But yes, the absolute clean way is to have a BookInterface and then bind it to a specific implementation. So it could all look like this:
Interfaces:
interface BookInterface
{
    /**
     * Get the ISBN.
     *
     * @return string
     */
    public function getISBN();
}

interface BookRepositoryInterface
{
    /**
     * Find a book by the given Id.
     *
     * @return null|BookInterface
     */
    public function find($id);
}
Concrete implementations:
class Book extends Model implements BookInterface
{
    public function getISBN()
    {
        return $this->isbn;
    }
}

class EloquentBookRepository implements BookRepositoryInterface
{
    protected $book;

    public function __construct(Model $book)
    {
        $this->book = $book;
    }

    public function find($id)
    {
        return $this->book->find($id);
    }
}
And then bind the interfaces to the desired implementations:
App::bind('BookInterface', function()
{
    return new Book;
});

App::bind('BookRepositoryInterface', function()
{
    return new EloquentBookRepository(new Book);
});
It doesn't matter if Book extends Model or anything else; as long as it implements the BookInterface, it is a Book. That's why I bravely instantiated a new Book in the test: it doesn't matter if you change the ORM, it only matters if you have several implementations of the BookInterface, but that's not very likely (or sensible?), I guess. But just to play it safe, now that it's bound to the IoC-Container, you can instantiate it like this in the test:
$book = $this->app->make('BookInterface');
which will return an instance of whatever implementation of Book you're currently using.
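If you go down that route, the test from above barely changes; the mock is simply created for and bound to the interface instead of the concrete repository class (a sketch, assuming the controller now type-hints BookRepositoryInterface):

public function testShowBook()
{
    // resolve whatever implementation of BookInterface is currently bound
    $book = $this->app->make('BookInterface');

    // mock the interface, not the concrete EloquentBookRepository
    $bookRepoMock = Mockery::mock('BookRepositoryInterface');
    $bookRepoMock->shouldReceive('find')
        ->once()
        ->with(1)
        ->andReturn($book);

    // whenever a BookRepositoryInterface is requested, hand out the mock
    $this->app->instance('BookRepositoryInterface', $bookRepoMock);

    $response = $this->call('GET', 'books/1');

    $this->assertEquals(200, $response->getStatusCode());
    $this->assertViewHas('book', $book);
}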
So, for better testability:
Code to interfaces rather than concrete classes
Use Laravel's IoC-Container to bind interfaces to concrete implementations (including mocks)
Use dependency injection
I hope that makes sense.

Related

How to write a test for a bundle that depends on an entity

I've been working on a grid bundle for Symfony. The bundle receives a Symfony entity and, based on that, renders a grid view.
Something like this:
class IndexController extends AbstractController
{
    private $grid;
    private $userGrid;

    public function __construct(GridBuilder $grid, BookGrid $userGrid)
    {
        $this->grid = $grid;
        $this->userGrid = $userGrid;
    }

    /**
     * @Route("/")
     */
    public function index()
    {
        return $this->render('index.html.twig', [
            'grid' => $this->grid->build($this->userGrid),
        ]);
    }
}
BookGrid is a class extending BaseGridConfigurator, which has to implement the getEntity method:
class BookGrid extends BaseGridConfigurator
{
    public function getEntity()
    {
        return Book::class;
    }
}
The GridBuilder uses the EntityRepository (in this case BookRepository) to get the entity's metadata, such as its fields, its Repository and its QueryBuilder.
If I want to write unit tests for the bundle, I need an entity class to pass to GridBuilder. I think there are two approaches to solve this problem:
Create a mock Entity and Repository
Create a real Entity and Repository class inside my test directory
My question is: which approach is correct? And is there any other way to test a bundle that depends on an entity?
Thank you
Assuming getEntity (which perhaps should be renamed to getEntityClass) is used by GridBuilder to obtain the desired entity repository from the entity manager internally, wouldn't it be easier to have BaseGridConfigurator provide access to the entity repository directly? E.g. getEntityRepository(): EntityRepository instead of getEntity(): string. I can imagine this would significantly reduce the amount of mocking you would have to do if all you need is the entity repository.
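A rough sketch of that idea (hypothetical signatures, adjust to the bundle's actual API):

use Doctrine\ORM\EntityRepository;

abstract class BaseGridConfigurator
{
    // expose the repository itself instead of the entity class name,
    // so GridBuilder no longer needs to go through the entity manager
    abstract public function getEntityRepository(): EntityRepository;
}

class BookGrid extends BaseGridConfigurator
{
    private $repository;

    public function __construct(EntityRepository $repository)
    {
        $this->repository = $repository;
    }

    public function getEntityRepository(): EntityRepository
    {
        return $this->repository;
    }
}

With that shape, GridBuilder only depends on whatever EntityRepository the configurator hands it.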
In any case, the Symfony documentation on the subject of unit testing entity repositories advises against unit testing implementations that depend on entity repositories in general.
But if you have to, I would focus on a design where your implementation needs as few contact points with the entity repository as possible, in order to minimize the amount of mocking the test requires. I would still opt for mocking over stubbing regardless.
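Following on from the sketch above, a unit test then only has to hand the configurator a mocked repository. Again, this is only a sketch; it assumes GridBuilder can be constructed without further dependencies and that it asks the repository for the entity class name:

use Doctrine\ORM\EntityRepository;
use PHPUnit\Framework\TestCase;

class GridBuilderTest extends TestCase
{
    public function testBuildUsesTheConfiguredRepository(): void
    {
        // mock rather than stub, so the interaction can also be verified
        $repository = $this->createMock(EntityRepository::class);
        $repository->expects($this->atLeastOnce())
            ->method('getClassName')
            ->willReturn(Book::class);

        $configurator = new BookGrid($repository);

        $grid = (new GridBuilder())->build($configurator);

        $this->assertNotNull($grid);
    }
}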

Unit Of Work and Repository inter dependency

I have seen lots of posts (and debates!) about which way round UnitOfWork and Repository should go. One of the repository patterns I favor is the typed generic repository pattern, but I fear this has led to some issues with clean code and testability. Take the following repository interface and generic class:
public interface IDataEntityRepository<T> : IDisposable where T : IDataEntity
{
    // CRUD
    int Create(T createObject);
    // etc.
}

public class DataEntityRepository<T> : IDataEntityRepository<T> where T : class, IDataEntity
{
    private IDbContext Context { get; set; }

    public DataEntityRepository(IDbContext context)
    {
        Context = context;
    }

    private IDbSet<T> DbSet { get { return Context.Set<T>(); } }

    public int Create(T createObject)
    {
        DbSet.Add(createObject);
        return 0; // placeholder; SaveChanges() is left to the unit of work
    }
    // etc.
}

// where
public interface IDbContext
{
    IDbSet<T> Set<T>() where T : class;
    DbEntityEntry<T> Entry<T>(T readObject) where T : class;
    int SaveChanges();
    void Dispose();
}
So basically I am using the Context property in each pattern to gain access to the underlying context.
My problem is now this: when I create my unit of work, it will effectively be a wrapper of the context I need the repository to know about. So, if I have a Unit Of Work that declares the following:
public UserUnitOfWork(
    IDataEntityRepository<User> userRepository,
    IDataEntityRepository<Role> roleRepository)
{
    _userRepository = userRepository;
    _roleRepository = roleRepository;
}

private readonly IDataEntityRepository<User> _userRepository;
public IDataEntityRepository<User> UserRepository
{
    get { return _userRepository; }
}

private readonly IDataEntityRepository<Role> _roleRepository;
public IDataEntityRepository<Role> RoleRepository
{
    get { return _roleRepository; }
}
I have a problem with the fact that the two repositories I am passing in both need to be instantiated with the very Unit Of Work into which they are being passed. Obviously I could instantiate the repositories inside the constructor and pass in the "this" but that tightly couples my unit of work to a particular concrete instance of the repositories and makes unit testing that much harder.
I would be interested to know if anyone else has headed down this path and hit the same wall. Both these patterns are new to me so I could well be doing something fundamentally wrong. Any ideas would be much appreciated!
UPDATE (response to @MikeSW)
Hi Mike, many thanks for your input. I am working with EF Code First, but I wanted to abstract certain elements so I could switch to a different data source or ORM if required, and because I am (trying!) to push myself down a TDD route using mocking and IoC. I think I have realised the hard way that certain elements cannot be unit tested in a pure sense but can have integration tests!

I'd like to pick up your point about repositories working with business objects or view models etc. Perhaps I have misunderstood, but if I have what I see as my core business objects (POCOs), and I then want to use an ORM such as EF Code First to wrap around those entities in order to create, and then interact with, the database (and, it's possible, I may re-use these entities within a view model), I would expect a repository to handle these entities directly in the context of some set of CRUD operations. The entities know nothing about the persistence layer at all, and neither would any view model. My unit of work simply instantiates and holds the required repositories, allowing a transaction commit to be performed (across multiple repositories but the same context/session).

What I have done in my solution is to remove the injection of an IDataEntityRepository ... etc. from the UnitOfWork constructor, as this is a concrete class that must know about one and only one type of IDataEntityRepository it should be creating (in this case DataEntityRepository, which really should be better named EFDataEntityRepository). I cannot unit test this per se, because the whole logic of the unit would be to establish the repositories with a context (itself) to some database. It simply needs an integration test. Hope that makes sense?!
To avoid dependency on each repository in your Unit of Work, you could use a provider based on this contract:
public interface IRepositoryProvider
{
    DbContext DbContext { get; set; }
    IDataEntityRepository<T> GetRepositoryForEntityType<T>() where T : class;
    T GetRepository<T>(Func<DbContext, object> factory = null) where T : class;
    void SetRepository<T>(T repository);
}
then you could inject it into your UoW that would look like this:
public class UserUnitOfWork : IUserUnitOfWork
{
    public UserUnitOfWork(IRepositoryProvider repositoryProvider)
    {
        RepositoryProvider = repositoryProvider;
    }

    protected IRepositoryProvider RepositoryProvider { get; private set; }

    protected IDataEntityRepository<T> GetRepo<T>() where T : class
    {
        return RepositoryProvider.GetRepositoryForEntityType<T>();
    }

    public IDataEntityRepository<User> Users { get { return GetRepo<User>(); } }
    public IDataEntityRepository<Role> Roles { get { return GetRepo<Role>(); } }
    ...
}
Apologies for the tardiness of my response - I have been trying out various approaches to this in the meantime. I have marked up the answers above because I agree with the comments made.
This is one of those questions where there is more than one answer and it's very much dependent upon the overall approach. Whilst I agree that EF effectively provides a ready-made unit of work pattern, my decision to create my own unit of work and repository layers was to be able to control access to the database entities.
Where I struggled was in the need to be able to inject a repository into a unit of work. What I realised though was that in the case of EF, my unit of work was effectively a thin wrapper around multiple repositories with a Commit (SaveChanges) method. It was not responsible for executing specific actions such as FindCustomer etc.
So I decided that a unit of work could be tightly coupled to its specific type of DataRepository pattern. To ensure I had a testable pattern, I introduced a service layer that provided the facade for executing particular actions such as CreateCustomer, FindCustomers etc. These services accepted an IUnitOfWork constructor parameter, which provided access to the repositories (as interfaces) as well as the Commit method.
I was then able to create fakes of both unit of work and/ or repositories for testing purposes. This just left me with the decision of what could be unit tested with fakes and what needed to be integration tested with the concrete instances.
And this also gives me the opportunity to control what actions are performed on the database and how they are performed.
I'm sure there are many ways to skin this particular cat, but the goal of providing a clean, testable interface has just about been met with this approach.
My thanks to g1ga and Mike for their input.
When using Entity Framework (EF) (which I assume you're using) you already have a generic repository: IDbSet. It's useless to add another layer on top just to call EF methods.
Also, a repository works with application objects (usually business objects, but they can be view models or object state). If you're just using db entities, you kind of defeat the purpose of the Repository pattern (to isolate the business objects from the database). The original pattern deals only with business objects, but it is a useful pattern outside the business layer too.
The point is that EF entities are persistence objects and have (or should have) no relation to your business objects. You want to use the repository pattern to 'translate' the business objects to persistence objects and vice versa.
Sometimes it might happen that an application object (like a view model) is the same as a persistence entity (and in that case you can use the EF objects directly), but that's a coincidence.
About the Unit of Work (UoW), let's say that's tricky. Personally, I prefer to use the DDD (domain-driven design) approach and consider that any business object (BO) sent to the repository is a UoW, so it will be wrapped in a transaction.
If I need to update multiple BOs, I'll use a message-driven architecture to send commands to the relevant BOs. Of course, that's more complicated and requires you to be at ease with the concept of eventual consistency, but it means I'm not depending on a specific RDBMS.
If you know that you'll be using a specific RDBMS and that it will never change, you could start a transaction and pass the associated connection to each repository, with a commit at the end (that will be the UoW). If you're in a web setting, it's even easier: start a transaction when the request begins and commit when the request ends (you can use an ActionFilter in ASP.NET MVC).
However, this solution is tied to one RDBMS, so it won't work with NoSQL or any storage that doesn't support transactions. For those cases, the message-driven way is the best.

Unit Testing: How to test a subclass in isolation from its parent?

I'm getting started unit testing my PHP application with PHPUnit. I understand that it's important for unit tests to run in isolation so you know where to look when a test fails. One thing I am struggling to understand is how to test subclasses in isolation from their parent. For example, most of my models extend a "base model" which has most of the features that a model should have.
<?php
class BaseModel
{
public function save($data) {
// write $data to the database
$dbAdapter->save($data);
}
}
class RegularModel extends BaseModel
{
public function save($data) {
// clean up $data before passing it to parent
if (isset($data['foo'])) {
unset($data['foo']);
$data['bar'] = 'foo';
}
parent::save($data);
}
}
# Unit Test
class RegularModelTest extends PHPUnit_Framework_TestCase
{
public function testSaveMethodCallsParent() {
$data = array('foo' => 'yes');
$model = new RegularModel();
$model->save($data);
// assert parent received data correctly
}
}
I'm not sure how to test my RegularModel without calling a bunch of unnecessary code. I'm also doing some autoloading, so when it calls save on the parent, it will actually try to save to the test database. I'd rather mock this out, since I don't care whether or not it actually writes to the database when I'm testing my RegularModel, only when I am testing my BaseModel. Or am I thinking about this all wrong? What do you recommend when it comes to testing situations like this?
Your best bet is to mock the $dbAdapter when testing RegularModel. While you are still executing the parent class's code, that code really is part of the RegularModel unit due to the is-a relationship.
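A sketch of that suggestion, assuming the adapter can be handed to the model from the outside, e.g. via a setter (the setter and the DbAdapter class name are illustrative, not part of the question):

class RegularModelTest extends PHPUnit_Framework_TestCase
{
    public function testSaveCleansDataBeforePassingItToTheAdapter()
    {
        // stand-in for whatever adapter class the application really uses
        $dbAdapter = $this->getMockBuilder('DbAdapter')
                          ->setMethods(array('save'))
                          ->getMock();

        // 'foo' should be stripped and 'bar' added before the data is written
        $dbAdapter->expects($this->once())
                  ->method('save')
                  ->with($this->equalTo(array('bar' => 'foo')));

        $model = new RegularModel();
        $model->setDbAdapter($dbAdapter); // hypothetical injection point
        $model->save(array('foo' => 'yes'));
    }
}

The parent's save() still runs, but it talks to the mock instead of the test database, and the expectation on save() asserts that RegularModel cleaned up the data correctly.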
The only way around it would be to provide a different implementation of BaseModel. You can either run these tests in a separate process or use Runkit to swap in the other implementation at runtime. Both of these have drawbacks (complexity, performance degradation, and instability) that come at too high a price in my view.
Subclasses are, by definition, tightly coupled to their superclasses so practically speaking there is no way to test a subclass in isolation.
However, if the superclass has an extensive test suite that passes then you normally can be confident that you can test the subclass by just covering the methods implemented in the subclass. The superclass test suite covers the superclass functionality.

Fake directories for .NET unit testing

I'm trying to create a unit test for a code similar to this:
foreach (string domainName in Directory.GetDirectories(server.Path))
{
    HandleDomainDirectory(session, server, domainName);
}
The problem is that I'm using the System.IO.Directory class in my code.
How can I create a testing method that won't be dependent on any folder I have on my hard disk?
In other words, how can I fake the response of "Directory.GetDirectories(server.Path)"?
(Please note, I do control the "server" object in my class, therefore I can give any path I want.)
Thanks.
Rather than calling Directory.GetDirectories(server.Path) directly, you could create an interface like IDirectoryResolver with a single method that takes a path string and returns the list of directories. The class containing your code above would then need a property or field of type IDirectoryResolver, which can be injected through the constructor or a setter.
For your production code, you would then create a new class that implements the IDirectoryResolver interface. This class could use the Directory.GetDirectories method in its implementation of the interface method.
For unit testing, you could create a MockDirectoryResolver class which implements IDirectoryResolver (or use a mocking library to create a mock instance for the interface). The mock implementation can do whatever you need it to do.
You would inject a wrapper class.
public class DirectoryFetcher
{
    public virtual List<string> GetDirectoriesIn(string directory)
    {
        // Directory.GetDirectories returns a string[], so convert it
        return Directory.GetDirectories(directory).ToList();
    }
}
And then inject that:
foreach (string directory in _directoryFetcher.GetDirectoriesIn(server.Path))
{
    // Whatever
}
You can then mock that guy at the injection point (this example uses Moq and constructor injection):
Mock<DirectoryFetcher> mockFetcher = new Mock<DirectoryFetcher>();
mockFetcher.Setup(x => x.GetDirectoriesIn("SomeDirectory")).Returns(new List<string>
{
    "SampleDirectory1",
    "SampleDirectory2"
});

MyObjectToTest testObj = new MyObjectToTest(mockFetcher.Object);

// Do Test
When communicating with the outside world, such as the file system, databases, web services etc., you should always consider using wrapper classes like the others before me suggested. Testability is one major argument, but an even bigger one is: the outside world changes, and you have no control over it. Folders move, user rights change, new disk drives appear and old ones are removed. You only want to care about stuff like that in one place. Hence, the wrapper -- let's call it DirectoryResolver like Andy White suggested earlier.
So, wrap your file system calls, extract an interface, and inject that interface where you need to communicate with the file system.
The best solution I've found was to use Moles. The code is very specific, and must do a very specific thing. Wrapping it with a wrapper class would be redundant. The only reason I would need a wrapper class is in order to write tests. Moles allows me to write the tests without any wrapper class :)

Proper application of Mock objects in Unit Testing

I've got a PresenterFactory that creates Presenter classes based on a Role parameter. Specifically, the Role parameter is an external class which I cannot control (i.e. third party).
My factory looks something like this:
public class PresenterFactory {
    public Presenter CreatePresenter(Role role, ...) {
        if (role.IsUserA("Manager")) {
            return new ManagerPresenter(...)
        }
        if (role.IsUserA("Employee")) {
            return new EmployeePresenter(...)
        }
    }
}
I'm stuck on how to write the unit test for this, since creating the Role object forces database access. I thought that I could mock this object. My test looked like this:
public void TestPresenterFactory()
{
    var mockRole = new Mock<Role>();
    mockRole.Setup(role => role.IsUserA("Manager"))
            .Returns(true)
            .AtMostOnce();

    new PresenterFactory().CreatePresenter(mockRole.Object, ...);

    mockRole.VerifyAll();
}
However, I receive an ArgumentException:
Invalid setup on a non-overridable member: role=> role.IsUserA("Manager")
I'm not sure where to go and sure could use some course correction. What am I doing wrong?
You can create a wrapper object for Role that has all the same methods and properties, but is mockable, and the default implementation simply returns the underlying Role's implementation.
Then your tests can use the wrapper Role to set up the desired behaviour.
This is often a way to get around concrete classes that really need mocking.
What you want to mock is the creation of a Role object, then pass that mock object into your CreatePresenter method. On the mock you would set whatever properties required to determine what kind of user it is. If you still have dependencies on the database at this point, then you might look at refactoring your Role object.
Consider using a mocking framework that does not impose artificial constraints (such as requirements for methods to be virtual, for classes to not be sealed, etc.) on how your code must be written to be mockable. The only example of such that I'm aware of in the .NET context is TypeMock.
In Java, when using EasyMock extensions, you would be able to mock "real" objects and methods; most likely there's an equivalent or alternative mocking framework that you can use for your purpose.