For example:
We have the entities PhotoGallery and Photo (one-to-many, cascade={"persist", "remove"}). If a user deletes a PhotoGallery, all of its Photos are automatically deleted from the database. But the photo files are not deleted from disk.
How can I add an onDelete-style function to the Photo entity that is executed before/after the entity is deleted, so that I can delete the photo files from the HDD?
You can use lifecycle events in your Photo model, as shown below.
Add the @HasLifecycleCallbacks annotation to the class definition:
/** @Entity @HasLifecycleCallbacks */
class Photo
{
    // ...
You can use the @PreRemove or @PostRemove event in order to delete the image from the HDD:
/** @PreRemove */
public function preRemoveEvent()
{
    // code to delete the image here
}
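For example, a minimal sketch of such a callback, assuming the entity stores its file location in a path property (an assumption; use whatever field your Photo actually has):
/** @PreRemove */
public function removeFileFromDisk()
{
    // Guard against a missing file so the remove operation does not fail.
    if ($this->path !== null && file_exists($this->path)) {
        unlink($this->path);
    }
}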
The EntityManager and UnitOfWork trigger a number of events during the lifetime of their registered entities. The @PreRemove event, for example, is triggered before the EntityManager's remove operation is executed:
$em->remove($entity);
$em->flush();
Finally, I see that you have already declared the cascade (one-to-many, cascade={"persist", "remove"}) as you should. The photos will now be removed from both the database and the HDD.
A very clean and comfortable approach is to use the Uploadable Doctrine behavior extension.
You can find everything about it in the docs:
Uploadable behavior provides the tools to manage the persistence of files with Doctrine 2, including automatic handling of moving, renaming and removal of files and other features.
Features:
The extension automatically moves, removes and renames files according to configuration
Lots of options: allow overwrite, append a number if the file exists, filename generators, post-move callbacks, etc.
It can be extended to work not only with uploaded files, but with files coming from any source (a URL, another file on the same server, etc.)
Validation of size and mime type
https://github.com/l3pp4rd/DoctrineExtensions/blob/master/doc/uploadable.md
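As a rough sketch of what the mapping looks like (adapted from the extension's documentation; the path and property names here are illustrative):
use Doctrine\ORM\Mapping as ORM;
use Gedmo\Mapping\Annotation as Gedmo;

/**
 * @ORM\Entity
 * @Gedmo\Uploadable(path="uploads/photos", allowOverwrite=true, appendNumber=true)
 */
class Photo
{
    /**
     * The extension keeps this column in sync with the file on disk.
     *
     * @ORM\Column(type="string")
     * @Gedmo\UploadableFilePath
     */
    private $filePath;
}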
Without scrutinizing why I want this (it may sound like a bad approach, but I have good reason), I want to know whether there is a way in the standard-framework-edition 3.1+ to create a relational association to an entity that may not exist...
Firstly, I do realize this determines the schema, and that's fine. So if an entity does not exist, it doesn't create a foreign key and the field is always null; if the target entity does exist, it creates the foreign key and the field works like a normal association...
Secondly, this only changes from project to project, and may change down the line as an update, at which point I realize a manual schema update could be necessary.
Preferably without 3rd-party bundle dependencies... I'm hoping for the standard framework to do this.
Anybody?
Thanks in advance
Edit
I am using annotations in my entities with the Doctrine ORM.
Furthermore
The simplest version of why I am doing this is that certain bundles are optional from project to project, and bundle A may make use of entities in bundle B only if it is present. I have considered using services and if container->has then container->get, or the XML on-invalid="null" approach, but that doesn't address property persistence. I was happy with storing a non-mapped value as a custom relational field, which is fine, just lengthier, and wondered whether there is a way Doctrine could ignore a missing targetEntity...
Hm, perhaps I misunderstand your question, but this sounds like a normal 'nullable' association to me?
Create your association via annotation:
/**
 * @var Child
 * @ORM\ManyToOne(targetEntity="Child")
 */
private $child;
and use
public function setChild(Child $child = null)
{
    $this->child = $child;
}
as a setter to allow nullable values.
And your getter might look like:
public function getChild()
{
    return $this->child;
}
In case there isn't any child it will return null.
I will keep my other answer, as it addresses the question of a 'nullable association target' in live data.
This is the answer for a 'nullable association target' in the metadata, which is a different thing.
The OP asks to provide a targetEntity in the metadata which may not exist in his case, e.g. because it lives in a bundle that is not installed (or whatever the OP's mysterious reason might be).
In that case I recommend building upon Doctrine's ResolveTargetEntityListener, which is able to resolve the targetEntity at runtime; the targetEntity can be set to an abstract class or an interface:
/**
 * @ORM\ManyToOne(targetEntity="Acme\InvoiceBundle\Model\InvoiceSubjectInterface")
 * @var InvoiceSubjectInterface
 */
protected $subject;
InvoiceSubjectInterface will then be replaced at runtime by a specific class provided via config, e.g.:
# app/config/config.yml
doctrine:
    # ...
    orm:
        # ...
        resolve_target_entities:
            Acme\InvoiceBundle\Model\InvoiceSubjectInterface: AppBundle\Entity\Customer
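If you are wiring Doctrine yourself rather than through the Symfony configuration, the same listener can be registered directly; a sketch using Doctrine's own API ($entityManager is assumed to already exist):
use Doctrine\ORM\Events;
use Doctrine\ORM\Tools\ResolveTargetEntityListener;

$listener = new ResolveTargetEntityListener();
// Map the interface to the concrete class available in this project.
$listener->addResolveTargetEntity(
    'Acme\InvoiceBundle\Model\InvoiceSubjectInterface',
    'AppBundle\Entity\Customer',
    []
);
$entityManager->getEventManager()->addEventListener(Events::loadClassMetadata, $listener);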
So this could either be extended into a behaviour that allows providing no class at all, or serve as the basis for your own solution.
I have an issue with TYPO3 Flow not updating my relations.
Am I wrong that Flow should update changed relations automatically, so that I don't have to update the related entities with their respective repositories?
Example 1:
I have a model "Project" with multiple "Job" children on the attribute "jobs".
If I do:
$project->setJobs($collectionOfJobs);
$this->projectRepository->update($project);
then the jobs are not updated correctly with the new project id.
Example 2:
I wanted to create a bidirectional one-to-one relationship between the models "Project" and "Briefing" and found out that there is a known bug in TYPO3:
Bidirectional One-To-One Relationships in Flow
So I wanted to fix it by setting the relation on the other side manually:
class Briefing {

    /**
     * @param \Some\Package\Domain\Model\Project $project
     * @return void
     */
    public function setProject($project) {
        $this->project = $project;
        $this->project->setBriefing($this);
        $this->projectRepository->update($this->project); // FIXME: Bug? Flow should do this
    }
}
but I had to update the relation through its repository myself. Shouldn't Flow do this automatically?
So do I really need to update each child through its repository myself, or should Flow do this for me?
Environment:
- TYPO3 FLOW 2.3.3 (latest stable)
- Doctrine 2.3.6
- PHP 5.4.39-0+deb7u2
From the Flow manual:
When you add or remove an object to or from a repository, the object will be added to or removed from the underlying persistence as expected upon persistAll. But what about changes to already persisted objects? As we have seen, those changes are only persisted, if the changed object is given to update on the corresponding repository.
Now, for objects that have no corresponding repository, how are changes persisted? In the same way you fetch those objects from their parent - by traversal. TYPO3 Flow follows references from objects managed in a repository (aggregate roots) for all persistence operations, unless the referenced object itself is an aggregate root.
So if there is a repository for your entity, you must explicitly call the update method. This was different originally, but it was changed for Flow 1.0.
Maybe you think that this should work because it worked in TYPO3 CMS Extbase < 6.2 until it was changed there, too.
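To illustrate with Example 1 from the question, assuming Job is itself an aggregate root with its own JobRepository (an assumption; adjust to your model):
$project->setJobs($collectionOfJobs);
$this->projectRepository->update($project);

// Entities with their own repository are aggregate roots, so Flow will not
// persist changes to them by traversal from Project; update each one explicitly.
foreach ($collectionOfJobs as $job) {
    $this->jobRepository->update($job);
}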
Example:
A SalesOrder is composed of a SalesOrderHeader and one or more SalesOrderItems. When editing an existing SalesOrder, the SalesOrderHeader can be modified and SalesOrderItems can be added, modified and deleted. All changes must be saved in a single transaction. Multiple users may edit the SalesOrder at the same time with optimistic concurrency.
I believe that the requirement to have the save done in a single transaction encourages us to communicate both the SalesOrderHeader and the SalesOrderItems in a single service call. The implication of packaging up the child data with its parent is that there will need to be some understanding as to whether the child data is added, modified or deleted.
Change tracking of the child entities can happen either on the server or on the client.
Change tracking on the server
The idea with this strategy is that the client can modify the SalesOrder at will, without tracking which SalesOrderItems are added, modified or deleted. The state of the SalesOrderItems will be determined on the server when the save service is called.
The server should remain stateless between service calls. This means that the server can't retain any information about the state of the SalesOrder between its retrieval and its eventual save. The only option left is for the server to determine the state of its entities by comparing the modified object graph to the database object graph.
With NHibernate, there is a merge function to accomplish this. With Entity Framework, the highest-voted feature request is to have this added. There's also an open-source implementation of this for EF called GraphDiff.
This sounds great in theory because it makes the services very easy to design and use. However, I see two major issues with this strategy. The first is performance. The entire object graph must be sent back on every save. Whether or not a SalesOrderItem was modified, it must be sent back or the server will assume it’s been deleted. The second problem is even more critical and it has to do with concurrency. If User 1 adds a SalesOrderItem to a SalesOrder and User 2 makes a change to the same SalesOrder, when User 2 saves the server will assume that the SalesOrderItem added by User 1 should be deleted because it was not included in User 2’s object graph. I don’t see a way this can be prevented in any implementation of server side change tracking.
Change tracking on the client
The alternative is to have the client track changes to its entities and communicate that state when calling the save service. One benefit is that the client does not need to send its unchanged child entities. This helps with performance. A downside is that all entities will need an additional property named something along the lines of “ObjectState” to track whether it’s added, modified or deleted. This makes the entity models on the server quite messy and filled with concerns unrelated to the business domain. This also puts onus on the different consumers of the service to maintain this state. Another problem is that it becomes difficult to deal with deleted entities. Should the SalesOrderHeader maintain a list of deleted SalesOrderItems? or should the SalesOrderItems get assigned a state of deleted which must be filtered out by the client UI?
I know that the Breeze JavaScript library has its own implementation of client-side entity tracking, but my concern is that its implementation requires both client-side and server-side components. Shouldn't the service layer isolate which technology we use on either side? What if non-JavaScript clients want to use my services?
Question
I would think this is a common scenario that should be addressed by the majority of service implementations. Have I made any incorrect assumptions, or am I doing anything out of the ordinary? What strategy have you implemented? Are there any reasonable alternatives?
Full disclosure: I work with Breeze, and I think change tracking on the client is the way to go. Change tracking on the client allows stateless servers, reduces traffic between the client and server, and allows offline use.
In Breeze, the "ObjectState" that you mention is called the EntityAspect, and each entity has one, but it is not part of the domain model. The server-side entities don't need an EntityAspect, but the server-side service has to know how to handle the entity state information that comes from the client.
Basically, the service needs to create, update, or delete entities based on the information coming from the client. There are existing server-side backends for Breeze that do all this already (in .NET (EF and NHibernate), Java, PHP, Node, and Ruby), but you can also write your own. Your server just needs to know how to talk to the client.
Let's say we've updated a SalesOrder and added a new SalesOrderItem. The Breeze client sends a save bundle that looks something like this:
{
  "entities": [
    {
      "Id": 123,
      "Title": "My Updated Title",
      "OrderDate": "2014-08-03T07:00:00.000Z",
      "entityAspect": {
        "entityTypeName": "SalesOrder:#My.DomainModel",
        "entityState": "Modified",
        "originalValuesMap": {
          "Title": "My Original Title"
        },
        "autoGeneratedKey": {
          "propertyName": "Id",
          "autoGeneratedKeyType": "Identity"
        }
      }
    },
    {
      "Id": -1,
      "SalesOrderId": 123,
      "ProductId": 456,
      "Quantity": 11,
      "entityAspect": {
        "entityTypeName": "SalesOrderItem:#My.DomainModel",
        "entityState": "Added",
        "originalValuesMap": {},
        "autoGeneratedKey": {
          "propertyName": "Id",
          "autoGeneratedKeyType": "Identity"
        }
      }
    }
  ]
}
Here, SalesOrder with Id# 123 has been modified (its Title has been changed). The entityAspect includes the originalValuesMap which shows what the previous Title was.
The server would need to update the existing SalesOrder with the new value. Whether the server needs to query the existing SalesOrder from the database before applying the changes is implementation-dependent.
A new SalesOrderItem has been added. A temporary Id, -1, was created for it on the client. The server needs to create and persist a new SalesOrderItem and generate a real Id for it.
The response from the server should contain the entities that were created and updated, and KeyMapping information that shows what server-generated keys map to the temporary client-side keys, so that the client can replace them.
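A response might be shaped roughly like the following (a sketch modeled on Breeze's .NET SaveResult; exact property names can vary by backend):
{
  "Entities": [
    { "Id": 123, "Title": "My Updated Title" },
    { "Id": 1001, "SalesOrderId": 123, "ProductId": 456, "Quantity": 11 }
  ],
  "KeyMappings": [
    { "EntityTypeName": "My.DomainModel.SalesOrderItem", "TempValue": -1, "RealValue": 1001 }
  ]
}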
Change tracking is not a simple problem, but Breeze tries to do the hard parts for you.
I'd like to piggy back on Steve's answer.
We should be clear: the onus for implementing the Order-graph (AKA "Order aggregate") transaction in a relational data model falls on the developer. BreezeJS (and the Breeze helpers for .NET servers) can facilitate, but you have to make it work.
The key to making this work is including the root element of the aggregate - the Order - in all changes to any entity within the aggregate. If you add, delete, or modify an OrderItem, make sure you modify the Order at the same time.
How? By bumping the Order's concurrency property (e.g., the rowVersion) and making sure that Breeze KNOWS this is your concurrency property.
You must implement root entity optimistic concurrency if you want to ensure Order aggregate consistency.
Now you can detect if someone else has made a change to any part of the Order aggregate. That could be a change to the Order or an add/mod/delete of one of its OrderItems.
You do not have to include all OrderItems in the change-set when you save a changed Order aggregate. You only need to include the OrderItems that are added/modified/deleted.
Of course some other user may make a change to the Order aggregate before you save yours. When you try to save yours, the save will fail with an optimistic concurrency error.
Upon detecting an optimistic concurrency error for an Order, make sure the client removes the entire Order aggregate from cache - the Order and all of its OrderItems - and then re-fetches the aggregate. Don't just re-fetch the root Order entity and start messing with its items. Make sure you remove the entire aggregate from cache and then re-fetch it (the Order and its items).
If everyone follows this protocol you'll be in fine shape on the server.
I'm new to Symfony2 and Doctrine.
Here is the problem as I see it.
I cannot use:
$repository = $this->getDoctrine()->getRepository('entity');
$my_object = $repository->findOneBy($index);
on an object that is persisted but not flushed yet!
I think getRepository reads from the DB, so it will not find a not-yet-flushed object.
My question: how can I read those objects that are persisted (I think they are somewhere in a "Doctrine session") in order to re-use them before I flush my entire batch?
Every profile has 256 physical plumes.
Every profile has 1 plumeOptions record assigned to it.
In plumeOptions, I have a cartridgeplume which is a FK for PhysicalPlume.
Every plume is identified by an ID (auto-generated) and an INDEX (user-generated).
Rule: say profile 1 has physical_plume_index number 3 (= index) connected to it.
Now, I want to copy a profile with all its related data to another profile.
A new profile is created. 256 new plumes are created and copied from the older profile.
I want to link the new profile to the new plume with index 3.
check here: http://pastebin.com/WFa8vkt1
I think you might want to have a look at this function:
$entityManager->getUnitOfWork()->getScheduledEntityInsertions()
It gives you back a list of entity objects that have been persisted but not yet flushed.
Hmm, I didn't really read your question well; with the above you will retrieve a full list (as an array), but you cannot query it like you can with getRepository. I will try to find something for you.
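For example, a minimal sketch (the entity class and getter are illustrative, based on the plume/index description in the question):
// Entities that have been persisted but not yet flushed.
$pending = $em->getUnitOfWork()->getScheduledEntityInsertions();

$match = null;
foreach ($pending as $entity) {
    // Filter manually, since the repository cannot query unflushed objects.
    if ($entity instanceof PhysicalPlume && $entity->getIndex() === 3) {
        $match = $entity;
        break;
    }
}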
I think you might be looking at the problem from the wrong angle. Doctrine is your persistence layer and database access layer. It is the responsibility of your domain model to provide access to objects once they are in memory. So the problem boils down to: how do you get a reference to an object without the persistence layer?
Where do you create the object you need to get hold of later? Can the method/service that creates the object return a reference to the controller so it can propagate it to the other place you need it? Can you dispatch an event that you listen to elsewhere in your application to get hold of the object?
In my opinion, Doctrine should be used at the startup of the application (as early as possible), to initialize the domain model, and at the shutdown of the application, to persist any changes to the domain model during the request. To use a repository to get hold of objects in the middle of a request is, in my opinion, probably a code smell and you should look at how the application flow can be refactored to remove that need.
Yours is effectively a business-logic problem.
Running a findBy query against the database for objects that are not yet flushed means burdening the DB layer with queries for objects you already have in your function's scope.
Also keep in mind that a findOneBy will also retrieve other, previously saved objects with the same attributes.
If you need to search only among the newly created objects, you should keep them, for example, in a session array variable and iterate over them with a foreach.
If you need a mix of already-saved items plus some new items, you should treat the two parts separately: one with a foreach, the other with a repository query (as sketched below)!
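A minimal sketch of that mixed case ($newPlumes is an illustrative variable holding the objects created during this request):
// Already-flushed items come from the repository...
$saved = $repository->findBy(['profile' => $profile]);

// ...and are combined with the new, not-yet-flushed objects you tracked yourself.
foreach (array_merge($saved, $newPlumes) as $plume) {
    // process each plume, old or new
}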
Suppose you have the canonical Customer domain object. You have three different screens on which Customer is displayed: External Admin, Internal Admin, and Update Account.
Suppose further that each screen displays only a subset of all of the data contained in the Customer object.
The problem is: when the UI passes data back from each screen (e.g. through a DTO), it contains only that subset of a full Customer domain object. So when you send that DTO to the Customer Factory to re-create the Customer object, you have only part of the Customer.
Then you send this Customer to your Customer Repository to save it, and a bunch of data will get wiped out because it isn't there. Tragedy ensues.
So the question is: how would you deal with this problem?
Some of my ideas:
include an argument to the Repository indicating which part of the Customer to update, and ignore the others
when you load the Customer, keep it in static memory, or in the session, or wherever, and then when you receive one of the DTOs from the UI, update only the parts relevant to the DTO
IMO, both of these are kludges. Are there any better ideas?
@chadmyers: Here is the problem.
Entity has properties A, B, C, and D.
DTO #1 contains properties for B and C.
DTO #2 contains properties for C and D.
The UI asks for DTO #1; you load the entity from the repository, convert it into DTO #1, filling in only B and C, and give it to the UI.
Now the UI updates B and sends the DTO back. You recreate the entity, and it has only B and C filled in, because that is all the DTO contains.
Now you want to save the entity, which has only B and C filled in, with A and D null/blank. The repository has no way of knowing whether it should update A and D in persistence as blanks, or whether it should ignore them.
I would use the factory to load a complete Customer object from the repository upon receipt of the DTO. After that, you can update only those fields that were specified in the DTO, as sketched below.
That also allows you to apply some optimistic concurrency to your Customer, by checking a last-updated timestamp, for example.
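A minimal sketch of that approach, using the A/B/C/D example from the question (method and property names are illustrative):
// Load the complete entity first, so A and D keep their persisted values.
$customer = $customerRepository->find($dto->id);

// Optional optimistic-concurrency check against a last-updated timestamp.
if ($customer->getUpdatedAt() != $dto->updatedAt) {
    throw new \RuntimeException('Customer was modified by someone else.');
}

// Overlay only the fields the DTO actually carries (B and C here);
// A and D are left untouched.
$customer->setB($dto->b);
$customer->setC($dto->c);

$customerRepository->save($customer);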
Is this a web app? Load the customer object from the repo, update it from the DTO, save it back. That doesn't seem like a kludge to me. :)
UPDATE: As per your updates (the A, B, C, D example)
So what I was thinking is that when you load the entity, it has A, B, C, and D filled in. If DTO#1 only updates B & C, that's OK. A and D are unaffected (which is the desired situation).
What the repository does with the B and C updates is up to it. If you're using Hibernate/NHibernate, for example, it will just figure it out and issue an update.
Just because DTO #1 only has B & C doesn't mean you have to also null out A & D. Just leave them alone.
I missed the point of this question at first because it is predicated on a few things that I don't think make sense from a design perspective.
Hydrating an entity from the repository and then converting it to a DTO is a waste of effort. I assume that your DAL passes a DTO to your repository, which then converts it to a full entity object. So converting it back to a DTO seems wasteful.
Having multiple DTOs makes sense if you have a search results page that shows a high volume of records and only displays part of your entity data. In that case it's efficient to pass that page just the data it needs. It does not make sense to pass a DTO that contains partial data to a CRUD page. Just give it a full DTO or even a full entity object. If it doesn't use all of the data, fine, no harm done.
So the main problem is that I don't think you should pass data to these pages using partial DTOs. If you used a full DTO, I would do the following 3 steps whenever the save action is performed:
Pull the full DTO from repository or db
Update the DTO with any changes made through the form
Save the full DTO back to the repository or db
This method requires an extra db hit but that's really not a significant issue on a CRUD form.
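In code, those three steps might look like this (a sketch; the repository methods are illustrative):
// 1. Pull the full DTO from the repository or db.
$dto = $customerRepository->getCustomerDto($customerId);

// 2. Update the DTO with any changes made through the form.
$dto->b = $form->b;
$dto->c = $form->c;

// 3. Save the full DTO back to the repository or db.
$customerRepository->saveCustomerDto($dto);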
If we have an understanding that a Repository handles (almost exclusively) a very rich domain Entity, then your numerous DTOs could simply map back.
i.e.
dtoUser.MapFrom<In,Out>(Entity)
or
dtoAdmin.MapFrom<In,Out>(Entity)
You would do the reverse to get the DTO information back into the Entity, and so on. So your repository only saves rich Entities, NOT numerous DTOs:
entity.Foo = dtoUser.Foo
or
entity.Bar = dtoAdmin.Bar
entityRepository.Save(entity) <-- do not pass a DTO.
The whole point of DTOs is to keep things simple for the presentation layer or, say, for WCF data transfer; they have nothing to do with the Repository or the Entity, for that matter.
Furthermore, you should never construct an Entity from DTOs... the only two ways to ever acquire an Entity are through a Factory (new) or a Repository (existing), respectively.
You mention storing the Entity somewhere; why would you do this? That is the job of your repository. It will decide where to get the Entity (db, cache, etc.), no need to store it somewhere else.
Hope that helps assign responsibility in your domain. It is always a challenge and there are gray areas here and there, but in general these are the typical uses of Repository, DTO, etc.