Keep two modules with doctrine entities independent - doctrine-orm

I am working on two ZF2 modules in the same project, both of which contain several Doctrine 2 entities. These entities have relations that cross module boundaries.
E.g.: Module1\Entities\Entity1 has a many-to-one relation with Module2\Entities\Entity2.
Is there a way to keep the two modules independent? I mean, if Module1 is the core module, I'd like to be able to deactivate Module2 without conflicts.
I have found nothing about this.
Thanks!

Doctrine 2 has what they call a ResolveTargetEntityListener, which might serve your purposes. Symfony 2 has an implementation for it, and it's possible that ZF2 has one as well.
But the bottom line is that you have designed a dependency between the two modules, and it's unreasonable to expect to be able to remove one of them. Your best bet would probably be to remove the relation between the two entities and simply use two queries to grab what you need, and possibly use event listeners to communicate between the modules.

Related

Should Page Object Model contain separate test cases?

I couldn't find the answer I was looking for, so I decided to ask more experienced testers. I've implemented the Page Object Model (POM) in my automation tests. I have a few different objects representing different sections of the website. I am wondering whether I should create a separate test case for each object, i.e. create separate .py files and import the libraries all over again, or just import the objects into the one .py file that represents the test. Which approach is more appropriate?
Like every pattern, a Page Object is meant to be a reusable solution in a specific context, one that makes it reasonable to use the pattern in the first place. I assume you considered the following points before jumping into POMs right away:
Page Objects are very often hard to maintain and to use. Very careful design is needed when grouping elements into headers, footers, or identifiable widgets; there shouldn't just be a big list of stuff, and names should be readable enough to explain what each element is for.
They can limit your design, e.g. you start to ignore better abstractions.
They offer limited flexibility, especially for refactoring (both structure and implementation).
POM is wrong by design, as it clearly violates the Single Responsibility Principle by keeping the element map and the actions upon those elements in the same class.
As to:
"Should I create a separate test case for each object? I mean, create separate .py files and import the libraries all over again, or just import the objects into the one .py file which represents the test?"
Your test harness should keep three concerns separate:
the test execution engine (core logic)
the test scripts
the test data
So, simply put: don't mix your test scripts with your Page Objects.
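To make that separation concrete, here is a minimal sketch (the file layout, the locators, and the use of Selenium WebDriver and pytest are my assumptions, not something prescribed above): the page object module knows only about elements and the actions on them, while the test script imports it and owns the assertions.

    # pages/login_page.py -- hypothetical page object: elements and actions only, no assertions
    from selenium.webdriver.common.by import By


    class LoginPage:
        USERNAME = (By.ID, "username")                      # assumed locators
        PASSWORD = (By.ID, "password")
        SUBMIT = (By.CSS_SELECTOR, "button[type=submit]")

        def __init__(self, driver):
            self.driver = driver

        def login(self, username, password):
            self.driver.find_element(*self.USERNAME).send_keys(username)
            self.driver.find_element(*self.PASSWORD).send_keys(password)
            self.driver.find_element(*self.SUBMIT).click()


    # tests/test_login.py -- the test script: imports the page object and holds the assertions
    # from pages.login_page import LoginPage
    #
    # def test_valid_login(driver):          # 'driver' would come from a pytest fixture
    #     LoginPage(driver).login("alice", "secret")
    #     assert "Dashboard" in driver.title

Test data (usernames, expected titles, etc.) can then live in fixtures or data files, which keeps all three concerns in separate places.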
Here is a GitHub repo dedicated to automation in testing and design patterns. Feel free to use it!

Django - How to modularize a large web application?

I'm trying to figure out the best way to build a large web application, but keeping the various logical sections modularized. For example, there will be a number of different logical sections:
Estimates
Work Orders
Accounting
Contacts
etc...
As much as possible, I want to have the code for each of these modules disconnected from all the other modules. If code from one section starts to throw errors or cause incorrect data, I don't want that to affect other modules.
My first thought is just to separate the sections using Django 'apps', with each module as its own app. If I build each app to be "pluggable", the code is self-sufficient. However, the problem with this approach is that the modules will likely need to access the models of other modules. For example, the 'Accounting' module will need access to the 'Contacts' model, since we want to send invoices to the people in our contacts list.
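To make the kind of cross-app reference I mean concrete, a rough sketch could look like this (the app, model, and field names are only illustrative); Django does let a foreign key target be given as a string, which avoids a hard import at module load time even though the dependency itself remains:

    # accounting/models.py -- hypothetical 'accounting' app referencing the 'contacts' app
    from django.db import models


    class Invoice(models.Model):
        # The string reference avoids importing contacts.models directly,
        # but the logical dependency on the contacts app is still there.
        recipient = models.ForeignKey("contacts.Contact", on_delete=models.PROTECT)
        amount = models.DecimalField(max_digits=10, decimal_places=2)
        issued_on = models.DateField()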
I looked into having a completely separate Django project for each module, but that poses problems (I think) for user authentication. Plus, to my knowledge, multiple Django projects can't (easily) share one database.
Just wondering if there are any good ways or best practices to make logically separate 'modules' while, as much as possible, keeping the code from one module completely isolated from the rest, but still having a cohesive web application.

How independent should Django apps be from one another?

I'm having trouble determining how I should split up the functions of my project into different apps.
Simple example: we have members, and members can have one or more services. Services can be upgraded, downgraded, have other services added on, and can also be cancelled. (This is extremely simplified; were it really that simple, I'd use a pre-made solution.)
My first thought was to make this into a 'member' application, and then a 'services' app that takes care of renewals, up/downgrades and cancellations.
I then thought I should probably make a renewal app, an up/downgrade app, and a cancellation app. But these apps would all depend on the same table(s) in the DB (members and services). I thought applications were supposed to be independent from one another. Is it OK to make applications that are heavily dependent on other apps' models?
Along the same lines, which application should I use to store the models to create the services table if so many apps use it?
I think your first thought was right: you don't get many benefits from splitting everything into multiple apps, and on the contrary it can become messy and hard to maintain.
The Django way of doing things depends a lot on the models. Each object is mapped to an entity in the data model, and your apps are mostly organised around that data model. So if you have one entity (service) that has several different aspects, it is better to treat those aspects as parts of the same thing; the other entity (member) should be its own app, since it is a different thing.
There is no penalty for importing models from other apps. The most important thing, in any case, is to keep the data model consistent.
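As a rough sketch of what that looks like in practice (the app, class, and field names are invented for illustration), the 'services' app can simply import and reference the 'members' app's model:

    # services/models.py -- hypothetical 'services' app importing a model from the 'members' app
    from django.db import models

    from members.models import Member  # cross-app import: perfectly normal in Django


    class Service(models.Model):
        member = models.ForeignKey(Member, on_delete=models.CASCADE, related_name="services")
        plan = models.CharField(max_length=100)
        cancelled = models.BooleanField(default=False)

Renewals, upgrades, downgrades and cancellations then become behaviour on this one model (or views in the services app) rather than separate apps.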
The point of apps is to allow code that is intended to be reused as an add-on by third parties. You probably won't want to split your project up much, if at all, into apps.

XSLT dependencies across OSGi bundles

I have been researching OSGi to determine its viability for updating an existing project. The project currently consists of modules (which are basically just directories) that contain XSL transforms. The transforms depend on transforms from other modules in the form of xsl:import and xsl:include statements. The reason I am considering OSGi is that, as the number of modules increases, it is becoming more difficult to keep track of the dependencies and to test the modules effectively.
Is it possible, using the OSGi framework, to declare XML/XSLT resources contained in one bundle and reference those resources in the import statements of XSL transforms in a separate bundle?
Yes, this works as Lukasz indicated: you need to write a simple URIResolver based on the extender model. An interesting approach is to use the Provide-Capability and Require-Capability headers to model the dependencies. This lets you handle the dependencies with good diagnostics, allows you to run multiple versions side by side, and it will work with OBR, a resolver that can find the missing parts. See http://www.osgi.org/blog/2012/03/requirements-and-capabilities.html
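As a sketch of what those headers could look like (the capability namespace and attribute names here are made up for illustration, not anything standard), the bundle that ships the shared stylesheets would declare the first header in its manifest, and a bundle whose transforms import them would declare the second:

    Provide-Capability: example.xslt; name=common-templates; version:Version=1.0
    Require-Capability: example.xslt; filter:="(&(name=common-templates)(version>=1.0))"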
And this would be the first time I have seen someone make use of the fact that XSLT is XML ... you could write a simple stylesheet that generates the Require-Capability headers! :-)
Your question seems very interesting. I am personally working on a system that has two bundles: one contains the XSLT processor implementation (we are using Saxon), while the second contains multiple XSLT files (which make use of the xsl:import instruction). It works well in an OSGi environment (Fuse ESB, actually); however, we needed to implement the javax.xml.transform.URIResolver interface and pass it to the transformer.
I suppose you would need to use a similar approach. Hope this helps.
I would just use Maven for dependency management if I were you - it's simpler to set up your dependencies and it handles transitive dependencies very well indeed. Use OSGi if you need to be able to change the XSL modules at run-time. In both cases you'll need to implement the URIResolver mentioned in the other answer.

Django best practice: number of model classes per file / directory structure

Another best-practice question to those with experience: How many models do you put in one file?
I've seen many examples that stuff all model classes into a single "models.py" file, which really feels wrong to me. In previous projects using other stacks I've gone with one file per model class. How is it done properly in Django for non-trivial applications with, say, 20 model classes? What would the directory structure look like?
Many people, especially those coming from the Rails world, get hung up on the term "application" and start throwing absolutely everything into a single app. In Django, an application is a single-use module that does one thing and does it well. Each application should be describable in one or two short sentences. A "project" is a collection of applications unified by a settings file. Even for something as complicated as an online store (something I'm discovering now), having more than four or five models in a single application is a warning sign: it was better for the store application (which has the shopping cart) to depend upon the product application than to have both in the same app. The same was true of invoices and payments, and so on.
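Concretely, a project split along those lines is just wired together in the settings file, roughly like this (the app names are placeholders echoing the store example above, not a prescription):

    # settings.py -- a project is a collection of small, focused apps
    INSTALLED_APPS = [
        "django.contrib.admin",
        "django.contrib.auth",
        "django.contrib.contenttypes",
        "products",   # catalogue: Product, Category
        "store",      # shopping cart; depends on products
        "invoices",   # billing; depends on store
        "payments",   # depends on invoices
    ]

Each of those apps then carries only its own handful of models.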
Take a look at Django in the Real World, Jacob Kaplan-Moss's presentation about how to write Django applications.
A Django application is encapsulation: it describes one simple object (or collection of objects) and its API. Having 20 models sounds like you don't have a clean API, and therefore a clear concept, of what this app does.
The answer you want is "it depends on what your application does." To that end, what the Hell does an application with 20 models do, anyway? Grab your copy of Refactoring and have fun.