Here is the stripped down version of my code:
@Entity
public class Item implements Serializable {

    @Id
    @GeneratedValue
    private long id;

    @ElementCollection(fetch = FetchType.EAGER, targetClass = Cost.class)
    @CollectionTable(name = "ItemCost", joinColumns = {@JoinColumn(name = "itemId")})
    private Set<Cost> costs = new HashSet<Cost>();

    @ElementCollection(fetch = FetchType.EAGER, targetClass = ItemLocation.class)
    @CollectionTable(name = "ItemLocation", joinColumns = {@JoinColumn(name = "itemId")})
    private Set<ItemLocation> itemLocations;
}
Is the above code allowed? I have two embeddable classes, Cost and ItemLocation, that I am using with @ElementCollection.
Issue:
When I try to run a named query
@NamedQuery(name = "Item.findAll", query = "SELECT i FROM Item i")
I see strange behavior: the records in the second element collection (the ItemLocation table) are getting doubled (inserted into the table twice).
As far as JPA 2.0 is concerned, your code is allowed. It is perfectly legal to have more than one collection annotated with @ElementCollection. It also most likely has nothing to do with the problem you have. By the way, to find out whether that is really the cause, have you tried your code without the costs collection?
At which point exactly do the duplicates in this collection occur for the first time? If ItemLocation does not define equals and hashCode, duplicates can easily be the result of items you add yourself.
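If ItemLocation is missing them, a value-based equals and hashCode might look like this (a sketch only; the warehouse and shelf fields are made-up placeholders for whatever ItemLocation actually holds):

import java.io.Serializable;
import java.util.Objects;
import javax.persistence.Embeddable;

@Embeddable
public class ItemLocation implements Serializable {

    private String warehouse; // hypothetical field
    private int shelf;        // hypothetical field

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof ItemLocation)) return false;
        ItemLocation other = (ItemLocation) o;
        // compare by value, so the Set can detect duplicates
        return shelf == other.shelf && Objects.equals(warehouse, other.warehouse);
    }

    @Override
    public int hashCode() {
        return Objects.hash(warehouse, shelf);
    }
}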
Possibly you are facing this problem: Primary keys in CollectionTable. Changing the type to List and adding @OrderColumn should help.
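A minimal sketch of that suggestion applied to the second collection above (note the field type changes from Set to List; the order column name is made up):

@ElementCollection(fetch = FetchType.EAGER, targetClass = ItemLocation.class)
@CollectionTable(name = "ItemLocation", joinColumns = {@JoinColumn(name = "itemId")})
@OrderColumn(name = "locationIndex") // hypothetical column that stores the list index
private List<ItemLocation> itemLocations = new ArrayList<ItemLocation>();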
Typically when you implement an entity using Doctrine you map it to a table explicitly:
<?php
/**
 * @Entity
 * @Table(name="message")
 */
class Message
{
    //...
}
Or you rely on Doctrine to implicitly map your class name to a table. I have several tables which are identical in schema, but I do not wish to re-create the class for each one; therefore at runtime (dynamically) I would like to change the table name accordingly.
Where do I start, or what would I look into overriding, to implement this odd requirement?
Surprisingly (to me), the solution is very simple. All you have to do is to get the ClassMetadata of your entity and change the name of the table it maps to:
/** #var EntityManager $em */
$class = $em->getClassMetadata('Message');
$class->setPrimaryTable(['name' => 'message_23']);
You need to be careful not to change the table name after you have loaded some entities of type Message and modified them. There is a big chance it will either produce SQL errors on saving (because of table constraints, for example), if you are lucky, or it will silently modify the wrong row (in the new table).
I suggest the following workflow:
set the desired table name;
load some entities;
modify them at will;
save them;
detach them from the entity manager (the method EntityManager::clear() is a quick way to start over);
go back to step 1 (i.e. repeat using another table).
Step 5 (detaching the entities from the entity manager) is useful even if you don't change or don't save the entities. It allows the entity manager to use less memory and to work faster.
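A rough sketch of that workflow, reusing the Message example from above (the table names and the loop body are made up):

/** @var EntityManager $em */
$class = $em->getClassMetadata('Message');

foreach (['message_23', 'message_24', 'message_25'] as $tableName) {
    // 1. set the desired table name
    $class->setPrimaryTable(['name' => $tableName]);

    // 2.-4. load, modify and save entities from that table
    $messages = $em->getRepository('Message')->findAll();
    foreach ($messages as $message) {
        // ... modify $message at will ...
    }
    $em->flush();

    // 5. detach everything before switching to the next table
    $em->clear();
}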
This is just one of the many methods you can use to dynamically set/change the mapping. Take a look at the documentation of class ClassMetadata for the rest of them. You can find more inspiration in the documentation page of the PHP mapping.
I have a many-to-one relationship that I want to be nullable:
@ManyToOne(optional = true)
@JoinColumn(name = "customer_id", nullable = true)
private Customer customer;
Unfortunately, JPA keeps setting the column in my database as NOT NULL. Can anyone explain this? Is there a way to make it work? Note that I use JBoss 7, JPA 2.0 with Hibernate as persistence provider and a PostgreSQL 9.1 database.
EDIT:
I found the cause of my problem. Apparently it is due to the way I defined the primary key in the referenced entity Customer:
@Entity
@Table
public class Customer {

    @Id
    @GeneratedValue
    @Column(columnDefinition="serial")
    private int id;
}
It seems that using @Column(columnDefinition="serial") for the primary key automatically sets the foreign keys referencing it to NOT NULL in the database. Is that really the expected behavior when specifying the column type as serial? Is there a workaround for enabling nullable foreign keys in this case?
Thank you in advance.
I found the solution to my problem. The way the primary key is defined in entity Customer is fine, the problem resides in the foreign key declaration. It should be declared like this:
@ManyToOne
@JoinColumn(columnDefinition="integer", name="customer_id")
private Customer customer;
Indeed, if the attribute columnDefinition="integer" is omitted, the foreign key column will by default be defined like the source column: a not-null serial with its own sequence. That is of course not what we want, as we just want to reference the auto-incremented ID, not to create a new one.
Besides, it seems that the attribute name="customer_id" is also required, as I observed when performing some testing. Otherwise the foreign key column will still be defined like the source column. This is strange behavior in my opinion. Comments or additional information to clarify this are welcome!
Finally, the advantage of this solution is that the ID is generated by the database (not by JPA) and thus we do not have to worry about it when inserting data manually or through scripts which often happens in data migration or maintenance.
I came across this problem but I was able to solve it this way:
@ManyToOne
@JoinColumn(nullable = true)
private Customer customer;
Maybe the problem emerged from declaring @ManyToOne(optional = true).
That is very weird.
In JPA the nullable parameter is true by default. I use this kind of configuration all the time and it works fine. If you try to save the entity, it should succeed.
Did you try to delete the table that was created for this relationship? Maybe you have a legacy table with that column?
Or maybe you should look for the cause in other chunks of code, because this is a proper configuration.
Note: I have tried this configuration on PostgreSQL with JPA2 and Hibernate.
EDIT
In that case maybe you can try a slightly different definition of the primary key.
For example you can use definition like this:
@Id
@GeneratedValue(strategy = GenerationType.AUTO)
@Column()
private Long id;
and PostgreSQL will generate
id bigint NOT NULL
-- with constraint
CONSTRAINT some_table_pkey PRIMARY KEY (id)
If this is good enough you can try this solution.
Within the transaction, but before the save operation, explicitly set the foreign key value to null. That way Hibernate never performs select queries against the table behind this foreign key and does not throw the exception "save the transient instance before flushing". If you want to set the null value conditionally, then: 1. fetch the value using a repository get/find call; 2. check the fetched value against your condition and set it to null accordingly. The code below has been tested and found working:
// Transaction Start
Optional<Customer> customerObject = customerRepository.findByCustomerId(customer.getCustomerId());
if (customerObject.isPresent()) {
    yourEnclosingEntityObject.setCustomer(customerObject.get());
} else {
    yourEnclosingEntityObject.setCustomer(null);
}
yourEnclosingEntityObjectRepository.save(yourEnclosingEntityObject);
// Transaction End
I have a query that selects multiple user objects. My user entity has a one-to-one relation that looks like this:
/**
 * @var UserProfile
 *
 * @ORM\OneToOne(targetEntity="UserProfile", mappedBy="user")
 */
private $userProfile;
Anytime I make a query to select multiple user objects, it creates an additional select statement per user to query for the UserProfile data even though I am not accessing it through a get method. I don't always need the UserProfile data, and I certainly don't want to load this data every single time I'm displaying a list of users.
Any idea why these queries are executed at run time?
Here is a solution explained in detail:
https://groups.google.com/forum/#!topic/doctrine-user/fkIaKxifDqc
"fetch" in the mapping is a hint, that is, if it is possible
Doctrine does that, but if it's not possible, obviously it does not. Proxying for lazy-loading is simply not always possible, technically.

The situations where it's not possible are:

1) one-to-one from inverse to owning side (appears only in bidirectional one-to-one associations). Precondition a) above can not be met.

2) one-to-one/many-to-one association to a hierarchy where the targeted class has subclasses (is not a leaf in the class hierarchy). Precondition b) above can not be met.

In these cases, proxying is technically not possible.

Your options to avoid this n+1 problem:

1) fetch-join via DQL: "select c, ca from Customer c join c.cart ca". A single query, but with a join; however, joins on to-one associations are relatively cheap.

2) force partial objects. No additional queries but also no lazy-load: $query->setHint(Query::HINT_FORCE_PARTIAL_LOAD, true)

3) if an alternative result format (i.e. getArrayResult()) is sufficient for a use-case, these also avoid this problem.

Benjamin had some ideas about automatic batching of these loads to avoid n+1 queries, but this does not change the fact that proxying is not always possible.
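Applied to the mapping in the question, option 1 (a fetch join via DQL) would look roughly like this (a sketch; the entity names follow the question, the aliases are made up):

/** @var EntityManager $em */
$users = $em->createQuery(
    'SELECT u, p FROM User u LEFT JOIN u.userProfile p'
)->getResult();

// one query: each User comes back with its UserProfile already hydrated,
// so Doctrine does not fire an extra SELECT per user afterwards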
I spent a lot of time searching for a solution. For me, none of the options were satisfying enough, but maybe I can save someone some time with this list of workarounds:
1) Change the owning side and inverse side http://developer.happyr.com/choose-owning-side-in-onetoone-relation - I don't think that's right from a DB design perspective every time.
2) In functions like find, findAll, etc., the inverse side of a OneToOne is joined automatically (it always behaves like fetch EAGER). But in DQL it does not work like fetch EAGER, and that is what costs the additional queries. A possible solution is to join with the inverse entity explicitly in every query.
3) If an alternative result format (i.e. getArrayResult()) is sufficient for some use-cases, that could also avoid this problem (see the sketch after this list).
4) Change the inverse side to be OneToMany - just looks wrong, maybe could be a temporary workaround.
5) Force partial objects. No additional queries but also no lazy-loading: $query->setHint(Query::HINT_FORCE_PARTIAL_LOAD, true) - seems to me the only possible solution, but not without a price: partial objects are a little bit risky, because your entity does not behave normally. For example, if you do not specify in ->select() all the associations that you will use, you can get errors because your object will not be fully loaded; all associations not explicitly selected will be null.
6) Do not map the inverse bi-directional OneToOne association and either use an explicit service or a more active-record approach - https://github.com/doctrine/doctrine2/pull/970#issuecomment-38383961 - and it looks like Doctrine closed the issue.
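For option 3, the array-hydration variant might look like this (a sketch; whether plain arrays are acceptable depends on the use-case):

/** @var EntityManager $em */
$rows = $em->createQuery('SELECT u FROM User u')
    ->getArrayResult();

// $rows is a plain array of arrays: no entities and no proxies are created,
// so no extra one-to-one queries are fired for the user profiles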
It seems that this is an open issue in Doctrine; see also:
4.7.1. Why is an extra SQL query executed every time I fetch an entity with a one-to-one relation?
If Doctrine detects that you are fetching an inverse side one-to-one association it has to execute an additional query to load this object, because it cannot know if there is no such object (setting null) or if it should set a proxy and which id this proxy has.
To solve this problem currently a query has to be executed to find out this information.
Source
As @apfelbox explained... there is no fix for it now.
I went for a OneToMany solution in a combination with unique key:
User.php
/**
 * @ORM\OneToMany(targetEntity="TB\UserBundle\Entity\Settings", fetch="EXTRA_LAZY", mappedBy="user", cascade={"all"})
 */
protected $settings;

/**
 * @return \Doctrine\Common\Collections\Collection
 */
public function getSettings()
{
    return $this->settings;
}
And
Settings.php
/**
 * @ORM\ManyToOne(targetEntity="TB\UserBundle\Entity\User", fetch="EXTRA_LAZY", inversedBy="settings")
 * @ORM\JoinColumn(name="user_id", referencedColumnName="id", nullable=false)
 */
protected $user;
And to ensure the uniqueness in Settings.php include:
use Doctrine\ORM\Mapping\UniqueConstraint;
And add a unique index:
/**
 * @ORM\Entity
 * @ORM\Table(name="user_settings", uniqueConstraints={@UniqueConstraint(name="user", columns={"user_id"})})
 */
class Settings
So when I want to access the user Settings I just need to do this (which will fire ONE query only, at that specific moment):
$_settings = $user->getSettings()->current();
I think it is the cleanest solution.
There is another option (which is the best IMHO) - you could use unidirectional OneToOne.
In your case - if you use UserProfile rarely - set up the link in UserProfile:
/**
 * @var User
 *
 * @ORM\OneToOne(targetEntity="User")
 */
private $user;
And just don't map it in User. You can load it when you need it.
If you use UserProfile often, you could make it part of the User entity.
According to the reference, you can add the optional fetch attribute:
/**
 * @var UserProfile
 *
 * @ORM\OneToOne(targetEntity="UserProfile", mappedBy="user", fetch="LAZY")
 */
private $userProfile;
I'm pretty new to Doctrine and wondering how to efficiently calculate the number of related objects there are for a particular model object.
I read here that it's not a great idea to use the entity manager within models so I'm wondering how I would query the database to find out without lazy loading all of the related models and doing a count().
I haven't really found a great answer yet, but it seems like this is a pretty fundamental thing?
For example
class House
{
    /**
     * @var Room[]
     */
    protected $rooms;

    public function getRoomCount()
    {
        // Can't use the entity manager here?
    }
}

class Room
{
    // Shed loads of stuff in here
}
Doctrine 2 will get counts for you automatically as association properties are actually Doctrine Collection objects:
public function getRoomCount()
{
return $this->rooms->count();
}
If you mark the association as eager, Doctrine will load the rooms whenever you query for house entities. If you mark them as lazy (the default), Doctrine won't load the rooms until you actually access the $this->rooms property.
As of Doctrine 2.1 you can mark associations as extra lazy. This means that calling $this->rooms->count() won't load the rooms, it will just issue a COUNT query to the database.
You can read about extra lazy collections here: http://www.doctrine-project.org/docs/orm/2.1/en/tutorials/extra-lazy-associations.html
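For example, the mapping on the House side could look roughly like this (a sketch assuming a bidirectional association where Room has a house property; adjust the names to your own mapping):

/**
 * @ORM\OneToMany(targetEntity="Room", mappedBy="house", fetch="EXTRA_LAZY")
 */
protected $rooms;

// with EXTRA_LAZY, this issues a single COUNT query
// instead of loading every Room into memory
$count = $house->getRoomCount();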
I have a method that builds and runs a Criteria query. The query does what I want it to, specifically it filters (and sorts) records based on user input.
Also, the query size is restricted to the number of records on the screen. This is important because the data table can be potentially very large.
However, if filters are applied, I want to count the number of records that would be returned if the query was not limited. So this means running two queries: one to fetch the records and then one to count the records that are in the overall set. It looks like this:
public List<Log> runQuery(TableQueryParameters tqp) {
    // get the builder, query, and root
    CriteriaBuilder builder = em.getCriteriaBuilder();
    CriteriaQuery<Log> query = builder.createQuery(Log.class);
    Root<Log> root = query.from(Log.class);
    // build the requested filters
    Predicate filter = null;
    for (TableQueryParameters.FilterTerm ft : tqp.getFilterTerms()) {
        // this section runs through the user input and constructs the
        // predicate
    }
    if (filter != null) query.where(filter);
    // attach the requested ordering
    List<Order> orders = new ArrayList<Order>();
    for (TableQueryParameters.SortTerm st : tqp.getActiveSortTerms()) {
        // this section constructs the Order objects
    }
    if (!orders.isEmpty()) query.orderBy(orders);
    // run the query
    TypedQuery<Log> typedQuery = em.createQuery(query);
    typedQuery.setFirstResult((int) tqp.getStartRecord());
    typedQuery.setMaxResults(tqp.getPageSize());
    List<Log> list = typedQuery.getResultList();
    // if we need the result size, fetch it now
    if (tqp.isNeedResultSize()) {
        CriteriaQuery<Long> countQuery = builder.createQuery(Long.class);
        countQuery.select(builder.count(countQuery.from(Log.class)));
        if (filter != null) countQuery.where(filter);
        tqp.setResultSize(em.createQuery(countQuery).getSingleResult().intValue());
    }
    return list;
}
As a result, I call createQuery twice on the same CriteriaBuilder and I share the Predicate object (filter) between both of them. When I run the second query, I sometimes get the following message:
Exception [EclipseLink-6089] (Eclipse Persistence Services -
2.2.0.v20110202-r8913):
org.eclipse.persistence.exceptions.QueryException Exception
Description: The expression has not been initialized correctly. Only
a single ExpressionBuilder should be used for a query. For parallel
expressions, the query class must be provided to the ExpressionBuilder
constructor, and the query's ExpressionBuilder must always be on the
left side of the expression. Expression: [ Base
com.myqwip.database.Log] Query: ReportQuery(referenceClass=Log ) at
org.eclipse.persistence.exceptions.QueryException.noExpressionBuilderFound(QueryException.java:874)
at
org.eclipse.persistence.expressions.ExpressionBuilder.getDescriptor(ExpressionBuilder.java:195)
at
org.eclipse.persistence.internal.expressions.DataExpression.getMapping(DataExpression.java:214)
Can someone tell me why this error shows up intermittently, and what I should do to fix this?
Short answer to the question: yes you can, but only sequentially.
In the method above, you start creating the first query, then start creating the second, then execute the second, then execute the first.
I had the exact same problem. I don't know why it's intermittent, though.
In other words, you start creating your first query and, before having finished it, you start creating and executing another.
Hibernate doesn't complain, but EclipseLink doesn't like it.
If you just start with the count query, execute it, and then create and execute the other query (which is what you've done by splitting it into 2 methods), EclipseLink won't complain.
see https://issues.jboss.org/browse/SEAMSECURITY-91
It looks like this posting isn't going to draw much more response, so I will answer this in how I resolved it.
Ultimately I ended up breaking my runQuery() method into two methods: runQuery() that fetches the records and runQueryCount() that fetches the count of records without sort parameters. Each method has its own call to em.getCriteriaBuilder(). I have no idea what effect that has on the EntityManager, but the problem has not appeared since.
Also, the DAO object that has these methods used to be @ApplicationScoped. It now has no declared scope, so it is constructed on demand by the various @RequestScoped and @ConversationScoped beans that use it. I don't know if this has any effect on the problem, but the problem has not appeared since, so I will use this as my code pattern from now on. Suggestions welcome.
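For illustration, a sketch of how such a runQueryCount() might look under that resolution (the predicate-building loop is elided just as in the question; the point is that the count query gets its own CriteriaBuilder, its own root, and a Predicate rebuilt against that root):

public void runQueryCount(TableQueryParameters tqp) {
    // a fresh builder, query and root, independent of runQuery()
    CriteriaBuilder builder = em.getCriteriaBuilder();
    CriteriaQuery<Long> countQuery = builder.createQuery(Long.class);
    Root<Log> root = countQuery.from(Log.class);
    countQuery.select(builder.count(root));

    // rebuild the filter predicate against THIS root,
    // using the same logic as in runQuery()
    Predicate filter = null;
    for (TableQueryParameters.FilterTerm ft : tqp.getFilterTerms()) {
        // construct the predicate from the user input, exactly as before
    }
    if (filter != null) countQuery.where(filter);

    // no ordering and no paging for the count
    tqp.setResultSize(em.createQuery(countQuery).getSingleResult().intValue());
}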