Avoiding cyclic dependencies between entities in two modules - doctrine-orm

I’m currently wondering what a good way is to keep ZF2 modules copyable from one project to another when I have Doctrine 2 entities that reference each other.
My current situation is something like this: I have an entity User from which I want to be able to access all languages this user speaks.
Of course, Language is not a component of the Authentication module, because I might want to use it for other purposes, too.
namespace Authentication\Entity;

class User
{
    public function getSpokenLanguages() { /* ... */ }
}
And:
namespace Application\Entity;

class Language
{
    public function getUsersWhoSpeakThisLanguage() { /* ... */ }
}
The problem is, I want my Authentication module to be totally independent from the project-specific module Application.
Is there a good way to keep these relations out of my entities, or possibly inject them from the Application module? Maybe a UserService (in the Application module) that gives me the Language[] for a specific User would also be a good idea? I could call it like this:
$userService->getUsersLanguages($user);
I think injection in particular might be a ZF2-style solution, but I have no idea how one could extend Doctrine 2 entities like that from another module.
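For illustration, here is a minimal sketch of that UserService idea, assuming it lives in the Application module and gets the Doctrine EntityManager injected; the 'speakers' association and class names are hypothetical:

namespace Application\Service;

use Doctrine\ORM\EntityManager;

class UserService
{
    protected $entityManager;

    public function __construct(EntityManager $entityManager)
    {
        $this->entityManager = $entityManager;
    }

    /**
     * Fetch the languages a user speaks, keeping the relation out of
     * the Authentication module's User entity entirely.
     *
     * @return \Application\Entity\Language[]
     */
    public function getUsersLanguages($user)
    {
        // assumes Language has a mapped 'speakers' association to users
        return $this->entityManager
            ->getRepository('Application\Entity\Language')
            ->createQueryBuilder('l')
            ->join('l.speakers', 'u')
            ->where('u.id = :id')
            ->setParameter('id', $user->getId())
            ->getQuery()
            ->getResult();
    }
}

This keeps Authentication\Entity\User free of any reference to Application, at the cost of losing the convenience of navigating the association directly on the entity.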

I think you're speaking to more of a semantic issue than one specific to ZF2. Reading your question, I think your Language becomes more of a managed layer that you can easily facilitate with factories and DI - luckily ZF2 has all the right tools. Consider something like this as a potential draft for a solution:
Create a LanguageAbstractFactory:
namespace Your\Namespace;

use Zend\ServiceManager\AbstractFactoryInterface;
use Zend\ServiceManager\ServiceLocatorInterface;

class LanguageAbstractFactory implements AbstractFactoryInterface
{
    /**
     * Determine if we can create a service with name
     *
     * @param ServiceLocatorInterface $serviceLocator
     * @param $name
     * @param $requestedName
     * @return bool
     */
    public function canCreateServiceWithName(ServiceLocatorInterface $serviceLocator, $name, $requestedName)
    {
        return stristr($requestedName, 'Your\Namespace\Language') !== false;
    }

    public function createServiceWithName(ServiceLocatorInterface $locator, $name, $requestedName)
    {
        $language = new $requestedName();
        $language->setServiceLocator($locator);
        return $language;
    }
}
Then, create your languages in that same namespace, as subclasses of Language, that implement ServiceLocatorAwareInterface (to give you database access and such down the road). The code in the factory above injects the service locator (and that's where you tweak it to inject other goodness to satisfy your language architecture):
namespace Your\Namespace;

use Zend\ServiceManager\ServiceLocatorAwareInterface;
use Zend\ServiceManager\ServiceLocatorInterface;

class Language implements ServiceLocatorAwareInterface
{
    protected $serviceLocator;

    public function setServiceLocator(ServiceLocatorInterface $serviceLocator)
    {
        $this->serviceLocator = $serviceLocator;
    }

    public function getServiceLocator()
    {
        return $this->serviceLocator;
    }

    // ... other things your factory knows about, which this class
    // may rely on, go here
}
A Language implementation might then look like:
namespace Your\Namespace\Language;

class English extends \Your\Namespace\Language
{
    public function getUsersWhoSpeakThisLanguage()
    {
        $sl = $this->getServiceLocator();
        // get your entities or w/e using the service locator
    }
}
Connect the factory by tweaking your module's Module.php at getServiceConfig:
public function getServiceConfig()
{
    return array(
        'abstract_factories' => array(
            // this one generates all of the Language subclasses
            'Your\Namespace\LanguageAbstractFactory',
        ),
    );
}
This gives you the ability to use the service manager to get a service-aware Language very easily, e.g., from a service-aware class:
$this->getServiceLocator()->get('Your\Namespace\Language\English');
Because of the config, and because the factory can meet the request, your factory will auto-configure the English instance with whatever logic you build into it, in a very portable fashion.
This is all kind of a primer on factories - if you rig up an interface that the Language service can use to speak to your user classes, you can invert control from the User to the Language service. Making your User implement a LanguageAware interface (for example), containing methods the Language service can use, should only be a few steps away.
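A rough sketch of that inversion, with hypothetical names (LanguageAware and getSpokenLanguageNames() are not part of any framework, just the contract you would rig up):

namespace Your\Namespace;

// Hypothetical contract: anything language-aware can report its languages.
interface LanguageAware
{
    /**
     * @return string[] e.g. array('English', 'French')
     */
    public function getSpokenLanguageNames();
}

The Language service can then work against the interface without knowing about the Authentication module at all:

public function speaks(LanguageAware $speaker, $languageName)
{
    return in_array($languageName, $speaker->getSpokenLanguageNames());
}

Your User implements LanguageAware inside its own module, and only the wiring (factories/config) knows about both sides.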
Hope this helps. There are probably 15 ways to skin this cat; this approach is one I have used to solve similar problems, e.g., that of "filtering" data: a filter can filter information, and information can be filtered by a filter.
Good Luck!

Related

How to write a test for a bundle that depends on some entity

I've been working on a grid bundle for Symfony. The bundle receives a Symfony Entity and based on that, it renders a gridview.
Something like this:
class IndexController extends AbstractController
{
    private $grid;
    private $userGrid;

    public function __construct(GridBuilder $grid, BookGrid $userGrid)
    {
        $this->grid = $grid;
        $this->userGrid = $userGrid;
    }

    /**
     * @Route("/")
     */
    public function index()
    {
        return $this->render('index.html.twig', [
            'grid' => $this->grid->build($this->userGrid),
        ]);
    }
}
BookGrid is a class extending BaseGridConfigurator, which has to implement the getEntity method:
class BookGrid extends BaseGridConfigurator
{
    public function getEntity()
    {
        return Book::class;
    }
}
The GridBuilder uses the EntityRepository (in this case BookRepository) to get the entity's metadata, such as its fields, its Repository, and a QueryBuilder.
If I want to write unit tests for the bundle, I need an entity class to pass to GridBuilder. I think there are two approaches to solve this problem.
Create a mock Entity and Repository
Create a real Entity and Repository class inside my test directory
My question is: which approach is correct? And is there any other way to test a bundle that depends on an entity?
Thank you
Assuming getEntity (which perhaps should be renamed to getEntityClass) is used by GridBuilder to obtain the desired entity repository from the entity manager internally, wouldn't it be easier to have BaseGridConfigurator provide access to the entity repository directly? E.g. getEntityRepository(): EntityRepository instead of getEntity(): string. I can imagine this would significantly reduce the amount of mocking you would have to do if all you need is the entity repository.
In any case, the Symfony documentation on the subject of unit testing entity repositories advises against unit testing repository-dependent implementations in general.
But if you have to, I would focus on a design where your implementation needs as few contact points with the entity repository as possible, in order to minimize the amount of mocking that the test requires. I would still opt for mocking over stubbing regardless.
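For the mocking approach, a minimal PHPUnit sketch (GridBuilder's constructor and a getEntityRepository method on the configurator are assumed here, following the suggestion above; neither is shown in the question):

use Doctrine\ORM\EntityRepository;
use PHPUnit\Framework\TestCase;

class GridBuilderTest extends TestCase
{
    public function testBuildUsesTheConfiguredRepository()
    {
        // Mock the repository so no real entity, mapping or database is needed.
        $repository = $this->createMock(EntityRepository::class);

        // Hypothetical configurator that exposes the repository directly
        // instead of returning an entity class name.
        $configurator = $this->createMock(BaseGridConfigurator::class);
        $configurator->method('getEntityRepository')->willReturn($repository);

        $grid = (new GridBuilder())->build($configurator);

        $this->assertNotNull($grid);
    }
}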

Laravel tests: passing a model to the view

I'm mocking my repository correctly, but in cases like show() it returns null, so the view ends up crashing the test because it calls a property on a null object.
I'm guessing I'm supposed to mock the eloquent model returned but I find 2 issues:
What's the point of implementing repository pattern if I'm gonna end up mocking eloquent model anyway
How do you mock them correctly? The code below gives me an error.
$this->mockRepository->shouldReceive('find')
    ->once()
    ->with(1)
    ->andReturn(Mockery::mock('MyNamespace\MyModel')
        // The view may call $book->title, so I'm guessing I have to mock
        // that call and its returned value, but this doesn't work as it says
        // 'Undefined property: Mockery\CompositeExpectation::$title'
        ->shouldReceive('getAttribute')
        ->andReturn('')
    );
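(For reference, the 'Undefined property' error happens because shouldReceive() returns a Mockery expectation object rather than the mock itself, so find() ends up returning the expectation. Building the model mock first and setting its expectations separately would avoid that; a sketch, assuming the view reads $book->title:)

$book = Mockery::mock('MyNamespace\MyModel');
// Eloquent routes $book->title through getAttribute('title')
$book->shouldReceive('getAttribute')->with('title')->andReturn('');

$this->mockRepository->shouldReceive('find')
    ->once()
    ->with(1)
    ->andReturn($book);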
Edit:
I'm trying to test the controller's actions as in:
$this->call('GET', 'books/1'); // will call Controller#show(1)
The thing is, at the end of the controller, it returns a view:
$book = Repo::find(1);
return view('books.show', compact('book'));
So the test case also runs the view, and if no $book is mocked, it is null and crashes.
So you're trying to unit test your controller to make sure that the right methods are called with the expected arguments. The controller-method fetches a model from the repo and passes it to the view. So we have to make sure that
the find()-method is called on the repo
the repo returns a model
the returned model is passed to the view
But first things first:
What's the point of implementing repository pattern if I'm gonna end up mocking eloquent model anyway?
It has many purposes besides (testable) consistent data access rules through different sources, (testable) centralized cache strategies, etc. In this case, you're not testing the repository, and you actually don't even care what's returned; you're just interested that certain methods are called. So, in combination with the concept of dependency injection, you now have a powerful tool: you can just switch the actual instance of the repo with the mock.
So let's say your controller looks like this:
class BookController extends Controller
{
    protected $repo;

    public function __construct(MyNamespace\BookRepository $repo)
    {
        $this->repo = $repo;
    }

    public function show()
    {
        $book = $this->repo->find(1);
        return View::make('books.show', compact('book'));
    }
}
So now, within your test you just mock the repo and bind it to the container:
public function testShowBook()
{
    // no need to mock this, just make sure you pass something
    // to the view that is (or acts like) a book
    $book = new MyNamespace\Book;

    $bookRepoMock = Mockery::mock('MyNamespace\BookRepository');

    // make sure the repo is queried with 1
    // and you want it to return the book instantiated above
    $bookRepoMock->shouldReceive('find')
        ->once()
        ->with(1)
        ->andReturn($book);

    // bind your mock to the container, so whenever an instance of
    // MyNamespace\BookRepository is needed (like in your controller),
    // the mock will be loaded.
    $this->app->instance('MyNamespace\BookRepository', $bookRepoMock);

    // now trigger the controller method
    $response = $this->call('GET', 'books/1');
    $this->assertEquals(200, $response->getStatusCode());

    // check if the controller passed what was returned from the repo
    // to the view
    $this->assertViewHas('book', $book);
}
//EDIT in response to the comment:
Now, in the first line of your testShowBook() you instantiate a new Book, which I am assuming is a subclass of Eloquent\Model. Wouldn't that invalidate the whole deal of inversion of control [...]? Since if you change the ORM, you'd still have to change Book so that it wouldn't be a subclass of Model.
Well... yes and no. Yes, I've instantiated the model class in the test directly, but model in this context doesn't necessarily mean instance of Eloquent\Model but more like the model in model-view-controller. Eloquent is only the ORM and has a class named Model that you inherit from, but the model class in itself is just an entity of the business logic. It could extend Eloquent, it could extend Doctrine, or it could extend nothing at all.
In the end it's just a class that holds the data that you pull e.g. from a database. From an architecture point of view it is not aware of any ORM, it just contains data. A Book might have an author attribute, maybe even a getAuthor() method, but it doesn't really make sense for a book to have a save() or find() method. But it does if you're using Eloquent. And that's ok, because it's convenient, and in small projects there's nothing wrong with accessing it directly. But it's the repository's (or the controller's) job to deal with a specific ORM, not the model's. The actual model is sort of the outcome of an ORM interaction.
So yes, it might be a little confusing that the model seems so tightly bound to the ORM in Laravel, but, again, it's very convenient and perfectly fine for most projects. In fact, you won't even notice it unless you're using it directly in your application code (e.g. Book::where(...)->get();) and then decide to switch from Eloquent to something like Doctrine - this would obviously break your application. But if this is all encapsulated behind a repository, the rest of your application won't even notice when you switch between databases or even ORMs.
So: you're working with repositories, which means only the Eloquent implementation of the repository should actually be aware that Book also extends Eloquent\Model and that it can call a save() method on it. The point is that it doesn't (= shouldn't) matter if Book extends Model or not; it should still be instantiable anywhere in your application, because within your business logic it's just a Book, i.e. a Plain Old PHP Object with some attributes and methods describing a book, not the strategies for finding or persisting the object. That's what repositories are for.
But yes, the absolute clean way is to have a BookInterface and then bind it to a specific implementation. So it could all look like this:
Interfaces:
interface BookInterface
{
    /**
     * Get the ISBN.
     *
     * @return string
     */
    public function getISBN();
}

interface BookRepositoryInterface
{
    /**
     * Find a book by the given Id.
     *
     * @return null|BookInterface
     */
    public function find($id);
}
Concrete implementations:
class Book extends Model implements BookInterface
{
    public function getISBN()
    {
        return $this->isbn;
    }
}

class EloquentBookRepository implements BookRepositoryInterface
{
    protected $book;

    public function __construct(Model $book)
    {
        $this->book = $book;
    }

    public function find($id)
    {
        return $this->book->find($id);
    }
}
And then bind the interfaces to the desired implementations:
App::bind('BookInterface', function()
{
    return new Book;
});

App::bind('BookRepositoryInterface', function()
{
    return new EloquentBookRepository(new Book);
});
It doesn't matter if Book extends Model or anything else, as long as it implements the BookInterface, it is a Book. That's why I bravely instantiated a new Book in the test. Because it doesn't matter if you change the ORM, it only matters if you have several implementations of the BookInterface, but that's not very likely (sensible?), I guess. But just to play it safe, now that it's bound to the IoC-Container, you can instantiate it like this in the test:
$book = $this->app->make('BookInterface');
which will return an instance of whatever implementation of Book you're currently using.
So, for better testability
Code to interfaces rather than concrete classes
Use Laravel's IoC-Container to bind interfaces to concrete implementations (including mocks)
Use dependency injection
I hope that makes sense.

Zend Framework 2 + Doctrine: get Entity Manager in Model

Edit 1: it seems like I didn't explain myself very well. Class Foo is not an entity. Just a general-purpose model that I would like to have access to the entity manager.
Edit 2: I don't think there is an answer to my question. Basically, I wanted a class that can have access to the EntityManager without this class being created by the service manager, simply because it may be called by a class that is also not created by the service manager. In other words, I was trying to achieve what Zend_Registry used to achieve in ZF1. I'll have to find another way of doing what I am trying to do.
I am trying to access Doctrine's entity manager in a model, in a similar way as it is done in a controller:
$this->getServiceLocator()->get('Doctrine\ORM\EntityManager');
The ZF2 manual (http://framework.zend.com/manual/2.0/en/modules/zend.service-manager.quick-start.html) says:
By default, the Zend Framework MVC registers an initializer that will inject the ServiceManager instance, which is an implementation of Zend\ServiceManager\ServiceLocatorInterface, into any class implementing Zend\ServiceManager\ServiceLocatorAwareInterface.
So I created the following class:
<?php
namespace MyModule\Model;

use Zend\ServiceManager\ServiceLocatorAwareInterface;
use Zend\ServiceManager\ServiceLocatorInterface;

class Foo implements ServiceLocatorAwareInterface
{
    protected $services;

    public function setServiceLocator(ServiceLocatorInterface $serviceLocator)
    {
        $this->services = $serviceLocator;
    }

    public function getServiceLocator()
    {
        return $this->services;
    }

    public function test()
    {
        $em = $this->getServiceLocator()->get('Doctrine\ORM\EntityManager');
    }
}
Then, from another class I call this class as such:
$foo = new \MyModule\Model\Foo();
$foo->test();
which throws the following error:
PHP Fatal error: Call to a member function get() on a non-object
So, I guess I am missing something somewhere, but what? Where? How? Perhaps there is an easier way to access the entity manager?
Thanks!
From your question, I see that you mainly have two misunderstandings: one about your design strategy (injecting an EntityManager into your model), and one about how things work with the service manager (ServiceLocatorAwareInterface). In my answer I'll try to focus on the second one.
Initializers are PHP closures that are run on each instance the Service Manager creates, before it returns the instance to you.
Here is an example of an Initializer :
// Lines 146 - 150 of the Zend\Mvc\Service\ServiceManagerConfig class + comments
$serviceManager->addInitializer(function ($instance) use ($serviceManager) {
    if ($instance instanceof ServiceManagerAwareInterface) {
        $instance->setServiceManager($serviceManager);
    }
});
As you can see, each time the Service Manager is asked to return an instance/object that implements the ServiceManagerAwareInterface interface, it will inject the Service Manager instance into it.
By the way, in your previous code you didn't implement that interface correctly, as you didn't define a setServiceManager method. However, this is not your only problem.
First, if you want the Service Manager to inject itself into your model, you need to construct your model instance through it (during this process it will call the initializers), for example through a factory if your class has complex dependencies.
[EDIT]
Example:
In your MyModule
namespace MyModule;

use Zend\ModuleManager\Feature\ServiceProviderInterface;
use MyModule\Model\Foo;

class Module implements ServiceProviderInterface
{
    // Previous code

    public function getServiceConfig()
    {
        return array(
            'factories' => array(
                // initializers run on instances the service manager creates,
                // so Foo gets the service manager injected here
                'myModelClass' => function ($serviceManager) {
                    return new Foo();
                },
            ),
        );
    }
}
Now, when you need a Foo instance you should call the Service Manager:
$serviceManager->get('myModelClass');
Don't forget to define the setServiceManager method, otherwise you're not correctly implementing ServiceManagerAwareInterface!
I think the only thing you're missing is to add your model class to the list of invokables and retrieve it through the service manager.
So basically add this to your module.config.php:
return array(
    'service_manager' => array(
        'invokables' => array(
            'MyModule\Model\Foo' => 'MyModule\Model\Foo',
        ),
    ),
);
And instantiate your model object like this (if in a controller):
$foo = $this->getServiceLocator()->get('MyModule\Model\Foo');
$foo->test();
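If the goal is mainly to avoid service-locator calls inside the model, another option is a factory that injects the EntityManager into Foo directly, so Foo needs no knowledge of the service manager at all (a sketch reusing the names from the question):

// In Module::getServiceConfig()
return array(
    'factories' => array(
        'MyModule\Model\Foo' => function ($serviceManager) {
            // the Doctrine module registers this service name
            $entityManager = $serviceManager->get('Doctrine\ORM\EntityManager');
            return new \MyModule\Model\Foo($entityManager);
        },
    ),
);

// MyModule\Model\Foo
class Foo
{
    protected $entityManager;

    public function __construct(\Doctrine\ORM\EntityManager $entityManager)
    {
        $this->entityManager = $entityManager;
    }

    public function test()
    {
        $em = $this->entityManager;
        // ...
    }
}

The trade-off is the one raised in Edit 2: Foo must then be obtained through the service manager (or be handed its dependencies by something that was).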

Unit Of Work and Repository inter dependency

I have seen lots of posts (and debates!) about which way round UnitOfWork and Repository should go. One of the repository patterns I favor is the typed generic repository pattern, but I fear this has led to some issues with clean code and testability. Take the following repository interface and generic class:
public interface IDataEntityRepository<T> : IDisposable where T : IDataEntity
{
    // CRUD
    int Create(T createObject);
    // etc.
}

public class DataEntityRepository<T> : IDataEntityRepository<T> where T : class, IDataEntity
{
    private IDbContext Context { get; set; }

    public DataEntityRepository(IDbContext context)
    {
        Context = context;
    }

    private IDbSet<T> DbSet { get { return Context.Set<T>(); } }

    public int Create(T createObject)
    {
        DbSet.Add(createObject);
        return 0; // persistence is deferred to the unit of work's SaveChanges
    }
    // etc.
}

// where
public interface IDbContext
{
    IDbSet<T> Set<T>() where T : class;
    DbEntityEntry<T> Entry<T>(T readObject) where T : class;
    int SaveChanges();
    void Dispose();
}
So basically I am using the Context property in each repository to gain access to the underlying context.
My problem is now this: when I create my unit of work, it will effectively be a wrapper of the context I need the repository to know about. So, if I have a Unit Of Work that declares the following:
public UserUnitOfWork(
    IDataEntityRepository<User> userRepository,
    IDataEntityRepository<Role> roleRepository)
{
    _userRepository = userRepository;
    _roleRepository = roleRepository;
}

private readonly IDataEntityRepository<User> _userRepository;
public IDataEntityRepository<User> UserRepository
{
    get { return _userRepository; }
}

private readonly IDataEntityRepository<Role> _roleRepository;
public IDataEntityRepository<Role> RoleRepository
{
    get { return _roleRepository; }
}
I have a problem with the fact that the two repositories I am passing in both need to be instantiated with the very unit of work into which they are being passed. Obviously I could instantiate the repositories inside the constructor and pass in "this", but that tightly couples my unit of work to particular concrete instances of the repositories and makes unit testing that much harder.
I would be interested to know if anyone else has headed down this path and hit the same wall. Both these patterns are new to me so I could well be doing something fundamentally wrong. Any ideas would be much appreciated!
UPDATE (response to #MikeSW)
Hi Mike, many thanks for your input. I am working with EF Code First, but I wanted to abstract certain elements so I could switch to a different data source or ORM if required, and because I am (trying!) to push myself down a TDD route using mocking and IoC. I think I have realised the hard way that certain elements cannot be unit tested in a pure sense but can have integration tests!
I'd like to raise your point about repositories working with business objects or viewmodels etc. Perhaps I have misunderstood, but if I have what I see as my core business objects (POCOs), and I then want to use an ORM such as EF Code First to wrap around those entities in order to create, and then interact with, the database (and, it's possible, I may re-use these entities within a ViewModel), I would expect a repository to handle these entities directly in the context of some set of CRUD operations. The entities know nothing about the persistence layer at all, and neither would any ViewModel.
My unit of work simply instantiates and holds the required repositories, allowing a transaction commit to be performed (across multiple repositories but the same context/session). What I have done in my solution is to remove the injection of an IDataEntityRepository ... etc. from the UnitOfWork constructor, as this is a concrete class that must know about one and only one type of IDataEntityRepository it should be creating (in this case DataEntityRepository, which really should be better named EFDataEntityRepository). I cannot unit test this per se, because the whole unit logic would be to establish the repositories with a context (itself) to some database; it simply needs an integration test. Hope that makes sense?!
To avoid dependency on each repository in your Unit of Work, you could use a provider based on this contract:
public interface IRepositoryProvider
{
    DbContext DbContext { get; set; }
    IRepository<T> GetRepositoryForEntityType<T>() where T : class;
    T GetRepository<T>(Func<DbContext, object> factory = null) where T : class;
    void SetRepository<T>(T repository);
}
then you could inject it into your UoW that would look like this:
public class UserUnitOfWork : IUserUnitOfWork
{
    public UserUnitOfWork(IRepositoryProvider repositoryProvider)
    {
        RepositoryProvider = repositoryProvider;
    }

    protected IDataEntityRepository<T> GetRepo<T>() where T : class
    {
        return RepositoryProvider.GetRepositoryForEntityType<T>();
    }

    public IDataEntityRepository<User> Users { get { return GetRepo<User>(); } }
    public IDataEntityRepository<Role> Roles { get { return GetRepo<Role>(); } }
    ...
Apologies for the tardiness of my response - I have been trying out various approaches to this in the meantime. I have marked up the answers above because I agree with the comments made.
This is one of those questions where there is more than one answer and it's very much dependent upon the overall approach. Whilst I agree that EF effectively provides a ready-made unit of work pattern, my decision to create my own unit of work and repository layers was to be able to control access to the database entities.
Where I struggled was in the need to be able to inject a repository into a unit of work. What I realised though was that in the case of EF, my unit of work was effectively a thin wrapper around multiple repositories with a Commit (SaveChanges) method. It was not responsible for executing specific actions such as FindCustomer etc.
So I decided that a unit of work could be tightly coupled to its specific type of DataRepository pattern. To ensure I had a testable pattern, I introduced a service layer that provided the facade for executing particular actions such as CreateCustomer, FindCustomers, etc. These services accepted an IUnitOfWork constructor parameter, which provided access to the repositories (as interfaces) as well as the Commit method.
I was then able to create fakes of both unit of work and/ or repositories for testing purposes. This just left me with the decision of what could be unit tested with fakes and what needed to be integration tested with the concrete instances.
And this also gives me the opportunity to control what actions are performed on the database and how they are performed.
I'm sure there are many ways to skin this particular cat, but the goals of providing a clean interface that is testable have just about been met with this approach.
My thanks to g1ga and Mike for their input.
When using Entity Framework (EF) (which I assume you're using), you already have a generic repository: IDbSet. It's useless to add another layer on top just to call EF methods.
Also, a repository works with application objects (usually business objects, but they can be view models or object state). If you're just using db entities, you kind of defeat the Repository pattern's purpose (to isolate the business objects from the database). The original pattern deals only with business objects, but it is a useful pattern outside the business layer too.
The point is that EF entities are persistence objects and have (or should have) no relation to your business objects. You want to use the repository pattern to 'translate' the business objects to persistence objects and vice versa.
Sometimes an application object (like a viewmodel) might happen to be the same as a persistence entity (in that case you can use the EF objects directly), but that's a coincidence.
About Unit of Work (UoW), let's say that's tricky. Personally, I prefer to use the DDD (domain driven design) approach and consider that any business object (BO) sent to the repoistory is a UoW, so it will be wrapped in a transaction.
If I need to update multiple BOs, I'll use a message-driven architecture to send commands to the relevant BOs. Of course, that's more complicated and requires being at ease with the concept of eventual consistency, but I'm not depending on a specific RDBMS.
If you know that you'll be using a specific RDBMS and that it will never be changed, you could start a transaction and pass the associated connection to each repository, with a commit at the end (that will be the UoW). If you're in a web setting, it's even easier: start the transaction when the request begins, commit when the request ends (you can use an ActionFilter for ASP.NET MVC).
However, this solution is tied to one RDBMS, so it won't apply to NoSQL or any storage which doesn't support transactions. For those cases, the message-driven way is the best.

What are the pros/cons to these 2 ways of defining parameters for a web service method

I have an existing web service I need to expand, but it has not gone into production yet. So, I am free to change the contracts as I see fit. But I am not sure of the best way to define the methods.
I am leaning towards Method 2 for no other reason than I cannot think of good names for the parameter classes!
Are there any major disadvantages to using Method 2 over Method 1?
Method 1
[DataContract(Namespace = Constants.ServiceNamespace)]
public class MyParameters
{
    [DataMember(Order = 1, IsRequired = true)]
    public int CompanyID { get; set; }

    [DataMember(Order = 2, IsRequired = true)]
    public string Filter { get; set; }
}

[ServiceContract(Namespace = Constants.ServiceNamespace)]
public interface IMyService
{
    [OperationContract, FaultContract(typeof(MyServiceFault))]
    MyResult MyMethod(MyParameters parameters);
}
Method 2
[ServiceContract(Namespace = Constants.ServiceNamespace)]
public interface IMyService
{
    [OperationContract, FaultContract(typeof(MyServiceFault))]
    MyResult MyMethod(int companyID, string filter);
}
Assuming you are using WCF and the WS-I Basic Profile, the biggest disadvantage of Method 2 over Method 1 is that it makes future evolution of the contract more difficult. Parameter classes allow the addition of new fields without creating a new version of the contract, whereas a straight method call does not (because in WS-I Basic, overloaded methods are not allowed). In WCF, there are some hoops you can jump through to get around this restriction, but it all tends towards a less readable, more configuration-heavy solution.
For naming parameter classes, I find it helps to think of the method in terms of the underlying message that it represents - the method name is an action, and the parameters are the message associated with that action. If you tell WCF to generate message contracts (when you add a service reference) you'll get to see all of that stuff, and it can sometimes help you understand how it all hangs together, although it does make the API more verbose and is unnecessary most of the time.