I have a method that builds and runs a Criteria query. The query does what I want it to: specifically, it filters (and sorts) records based on user input.
Also, the query size is restricted to the number of records shown on the screen. This is important because the data table can potentially be very large.
However, if filters are applied, I also want to count the number of records that would be returned if the query were not limited. So this means running two queries: one to fetch the records and one to count the records in the overall set. It looks like this:
public List<Log> runQuery(TableQueryParameters tqp) {
    // get the builder, query, and root
    CriteriaBuilder builder = em.getCriteriaBuilder();
    CriteriaQuery<Log> query = builder.createQuery(Log.class);
    Root<Log> root = query.from(Log.class);
    // build the requested filters
    Predicate filter = null;
    for (TableQueryParameters.FilterTerm ft : tqp.getFilterTerms()) {
        // this section runs through the user input and constructs the
        // predicate
    }
    if (filter != null) query.where(filter);
    // attach the requested ordering
    List<Order> orders = new ArrayList<Order>();
    for (TableQueryParameters.SortTerm st : tqp.getActiveSortTerms()) {
        // this section constructs the Order objects
    }
    if (!orders.isEmpty()) query.orderBy(orders);
    // run the query
    TypedQuery<Log> typedQuery = em.createQuery(query);
    typedQuery.setFirstResult((int) tqp.getStartRecord());
    typedQuery.setMaxResults(tqp.getPageSize());
    List<Log> list = typedQuery.getResultList();
    // if we need the result size, fetch it now
    if (tqp.isNeedResultSize()) {
        CriteriaQuery<Long> countQuery = builder.createQuery(Long.class);
        countQuery.select(builder.count(countQuery.from(Log.class)));
        if (filter != null) countQuery.where(filter);
        tqp.setResultSize(em.createQuery(countQuery).getSingleResult().intValue());
    }
    return list;
}
As a result, I call createQuery twice on the same CriteriaBuilder and I share the Predicate object (filter) between both of them. When I run the second query, I sometimes get the following message:
Exception [EclipseLink-6089] (Eclipse Persistence Services - 2.2.0.v20110202-r8913): org.eclipse.persistence.exceptions.QueryException
Exception Description: The expression has not been initialized correctly. Only a single ExpressionBuilder should be used for a query. For parallel expressions, the query class must be provided to the ExpressionBuilder constructor, and the query's ExpressionBuilder must always be on the left side of the expression.
Expression: [ Base com.myqwip.database.Log] Query: ReportQuery(referenceClass=Log )
    at org.eclipse.persistence.exceptions.QueryException.noExpressionBuilderFound(QueryException.java:874)
    at org.eclipse.persistence.expressions.ExpressionBuilder.getDescriptor(ExpressionBuilder.java:195)
    at org.eclipse.persistence.internal.expressions.DataExpression.getMapping(DataExpression.java:214)
Can someone tell me why this error shows up intermittently, and what I should do to fix this?
Short answer to the question: yes, you can, but only sequentially.
In the method above, you start creating the first query and, before you are completely finished with it, you start creating and executing the second one (it even reuses the first query's filter). Hibernate doesn't complain, but EclipseLink doesn't like it.
I had the exact same problem, though I don't know why it's intermittent.
If you start with the count query, execute it, and then create and execute the other query (which is what you've done by splitting it into two methods), EclipseLink won't complain.
See https://issues.jboss.org/browse/SEAMSECURITY-91
It looks like this posting isn't going to draw much more response, so I will answer it with how I resolved the problem.
Ultimately I ended up breaking my runQuery() method into two methods: runQuery(), which fetches the records, and runQueryCount(), which fetches the count of matching records without the sort parameters. Each method has its own call to em.getCriteriaBuilder(). I have no idea what effect that has on the EntityManager, but the problem has not appeared since.
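For illustration, a minimal sketch of what the separate count method can look like (not the exact production code; buildFilter() is a hypothetical helper that rebuilds the predicate against the count query's own Root rather than reusing the one built for the paging query):

public long runQueryCount(TableQueryParameters tqp) {
    CriteriaBuilder builder = em.getCriteriaBuilder();      // its own builder
    CriteriaQuery<Long> countQuery = builder.createQuery(Long.class);
    Root<Log> countRoot = countQuery.from(Log.class);       // its own root
    countQuery.select(builder.count(countRoot));
    // rebuild the filter against countRoot; do not reuse the Predicate
    // that was constructed for the paging query
    Predicate filter = buildFilter(builder, countRoot, tqp);
    if (filter != null) countQuery.where(filter);
    return em.createQuery(countQuery).getSingleResult();
}

The key point is that nothing built for one CriteriaQuery (in particular the Predicate and the Root it was built from) leaks into the other.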
Also, the DAO object that has these methods used to be @ApplicationScoped. It now has no declared scope, so it is constructed on demand by the various @RequestScoped and @ConversationScoped beans that use it. I don't know whether this has any effect on the problem, but since it has not reappeared, I will use this as my code pattern from now on. Suggestions welcome.
Related
I am currently using SDN 4 and trying to do the following query:
@Query("MATCH (n:TNode:{0}) RETURN n")
Collection<TNode> getNodes(String type);
where each node has a common label "TNode" and an individual label type. However, it always returns a syntax error. I'm sure the query is correct because it returns nodes when run in the Neo4j web client.
Does the error occur because SDN cannot find nodes by label?
This is a limitation of Cypher, not SDN. Labels (or relationship types) as parameters are not supported. See this and related feature requests.
You can work around this using a WHERE clause and the labels(n) function:
MATCH (n:TNode)
WHERE {0} in labels(n)
RETURN n
This comes with a caveat - it will go through all the nodes matched by the MATCH clause. In your situation, having the :TNode label might solve the issue, but in general a bare MATCH (n) would go through all nodes in the database, which will be very slow.
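Applied to the repository method from the question, the workaround would look roughly like this (a sketch, assuming the same {0} parameter placeholder style is used inside the WHERE clause):

@Query("MATCH (n:TNode) WHERE {0} IN labels(n) RETURN n")
Collection<TNode> getNodes(String type);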
Another option would be to build the query manually and use org.springframework.data.neo4j.template.Neo4jOperations#queryForObjects to run the query:
String query = "MATCH (n:TNode:" + type + ") RETURN n"; // ugly, but works; beware of query injection etc.
Collection<TNode> nodes = neo4jOperations.queryForObjects(TNode.class, query, params); // params: the (possibly empty) parameter map
From the API docs, DynamoDB does support pagination for scan and query operations. The catch is to set the ExclusiveStartKey of the current request to the value of the LastEvaluatedKey of the previous request in order to get the next set (logical page) of results.
I'm trying to implement the same thing, but I'm using DynamoDBMapper, which seems to have a lot more advantages, like tight coupling with data models. So if I wanted to do the above, I'm assuming I would do something like this:
// Mapping of the hash key of the last item in the previous query operation
Map<String, AttributeValue> lastHashKey = ..
DynamoDBQueryExpression expression = new DynamoDBQueryExpression();
...
expression.setExclusiveStartKey(lastHashKey);
List<Table> nextPageResults = mapper.query(Table.class, expression);
I hope my above understanding is correct on paginating using DynamoDBMapper.
Secondly, how would I know that I've reached the end of the results? From the docs, if I use the following API:
QueryResult result = dynamoDBClient.query((QueryRequest) request);
boolean isEndOfResults = StringUtils.isEmpty(result.getLastEvaluatedKey());
Coming back to DynamoDBMapper, how can I tell that I've reached the end of the results in this case?
You have a couple of different options with the DynamoDBMapper, depending on which way you want to go.
query - returns a PaginatedQueryList
queryPage - returns a QueryResultPage
scan - returns a PaginatedScanList
scanPage - returns a ScanResultPage
The key part here is understanding the difference between the methods and what functionality their returned objects encapsulate.
I'll go over PaginatedScanList and ScanResultPage, but the query counterparts basically mirror them.
The PaginatedScanList documentation says the following, emphasis mine:
Implementation of the List interface that represents the results from a scan in AWS DynamoDB. Paginated results are loaded on demand when the user executes an operation that requires them. Some operations, such as size(), must fetch the entire list, but results are lazily fetched page by page when possible.
This says that results are loaded as you iterate through the list. When you get through the first page, the second page is automatically fetched without you having to explicitly make another request. Lazy loading is the default strategy, but it can be overridden if you call the overloaded methods and supply a DynamoDBMapperConfig with a different DynamoDBMapperConfig.PaginationLoadingStrategy.
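For instance, something along these lines should switch a scan to eager loading instead (a sketch, assuming the same mapper and MyClass entity used in the sample further down):

// Assumption for illustration: load every page up front instead of lazily
final DynamoDBMapperConfig eagerConfig = new DynamoDBMapperConfig(
        DynamoDBMapperConfig.PaginationLoadingStrategy.EAGER_LOADING);
final PaginatedScanList<MyClass> eagerList =
        mapper.scan(MyClass.class, new DynamoDBScanExpression(), eagerConfig);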
The ScanResultPage is different: you are given one page of results, and it is up to you to deal with the pagination yourself.
Here is a quick code sample showing an example usage of both methods, which I ran against a table of 5 items using DynamoDBLocal:
final DynamoDBMapper mapper = new DynamoDBMapper(client);

// Using 'PaginatedScanList'
final DynamoDBScanExpression paginatedScanListExpression = new DynamoDBScanExpression()
        .withLimit(limit);
final PaginatedScanList<MyClass> paginatedList = mapper.scan(MyClass.class, paginatedScanListExpression);
paginatedList.forEach(System.out::println);

System.out.println();

// Using 'ScanResultPage'
final DynamoDBScanExpression scanPageExpression = new DynamoDBScanExpression()
        .withLimit(limit);
do {
    ScanResultPage<MyClass> scanPage = mapper.scanPage(MyClass.class, scanPageExpression);
    scanPage.getResults().forEach(System.out::println);
    System.out.println("LastEvaluatedKey=" + scanPage.getLastEvaluatedKey());
    scanPageExpression.setExclusiveStartKey(scanPage.getLastEvaluatedKey());
} while (scanPageExpression.getExclusiveStartKey() != null);
And the output:
MyClass{hash=2}
MyClass{hash=1}
MyClass{hash=3}
MyClass{hash=0}
MyClass{hash=4}
MyClass{hash=2}
MyClass{hash=1}
LastEvaluatedKey={hash={N: 1,}}
MyClass{hash=3}
MyClass{hash=0}
LastEvaluatedKey={hash={N: 0,}}
MyClass{hash=4}
LastEvaluatedKey=null
In the process of optimizing queries in my app I noticed something strange. In a given section of code I get an object, update some of its values, and then save it. In theory this should execute 2 queries, but in fact it executes 3: one SELECT when I fetch the object and two more when I save it (another SELECT and then the UPDATE!). While removing one query may seem silly, in this particular method I am updating many objects, so every query I save is one less hit on the db and should speed up the method.
Inspecting the queries shows that the two SELECTs are different: the first fetches many columns, while the SELECT executed by save() is trivially simple.
Here is the example code:
myobject = room.myobjects.get(id=myobject_id) # one query executed here
myobject.color = color
myobject.shape = shape
myobject.place = place
myobject.save() # two queries executed here
queries:
1) "SELECT `rooms_object`.`id`, `rooms_object`.`room_id`, ......FROM `rooms_object` WHERE (`rooms_object`.`id` = %s AND `rooms_object`.`room_id` = %s )"
2) "SELECT (1) AS `a` FROM `rooms_object` WHERE `rooms_object`.`id` = %s LIMIT 1"
3) "UPDATE ......this ones obvious"
I want the save method to recognize that it already has the object in memory and does not need to get it again... if that is even possible.
The second query is not actually pulling down the object again. It is doing an extremely fast "existence" check on the id before performing the UPDATE query. All that is returned from that query is a single 1, and the field is indexed, so it should be extremely efficient.
The reason the ORM is designed this way is that save() first looks at your object to see whether it currently has an id. If it does, Django does the SELECT to make sure the row really does still exist in the database. If it does, it performs the UPDATE; if somehow the record does not exist, it performs an INSERT instead. You can test this by creating the object, then deleting the row manually from your database without Django knowing, and then calling save().
This is how Django maintains consistency.
If it were a new object, you would only get a single INSERT query, because Django knows the object has no id yet.
This is managed with the force_update parameter in
Model.save([force_insert=False, force_update=False, using=DEFAULT_DB_ALIAS, update_fields=None])
Set force_update to True to skip the existence check (the "SELECT (1) AS a FROM ..." query) and issue the UPDATE directly.
https://docs.djangoproject.com/en/dev/ref/models/instances/
Hello there.
I am attempting to execute a many-to-many get-all query. To be clear, I am attempting to get a collection within a collection pulled back; i.e., we will get a result set, but in that result set there will be a collection of all the objects linked to it via a foreign key. Now, to do this, I have a collection which I annotate thusly...
@ManyToMany
@JoinTable(name="QUICK_LAUNCH_DISTLIST",
    joinColumns=@JoinColumn(name="QUICK_LAUNCH_ID"),
    inverseJoinColumns=@JoinColumn(name="LIST_ID"))
private Collection<QuickLaunchDistlist> distributionLists;
Which seems to be just about text book...
I call a named query which looks like this...
@NamedQuery(name="getQuickLaunch", query = "SELECT q FROM QuickLaunch q")
Which is executed like so...
qlList = emf.createNamedQuery("getQuickLaunch").getResultList();
Every time I make this call, I get back the expected data in the outer collection, but none of the nested collections are populated. To find out why, I looked at the SQL being generated by the call... This is what I found...
I get this exception...
This is a FFDC log generated for the Default Resource Adapter from source:com.ibm.ws.rsadapter.jdbc.WSJdbcPreparedStatement.executeQuery
The exception caught:java.sql.SQLSyntaxErrorException: ORA-00904: "T1"."QL_DISTLIST_ID": invalid identifier
SQL Error Code is 904 SQL State is :42000
Along with this query...
SELECT t1.QL_DISTLIST_ID, t2.LIST_ID, t2.CREATE_DATE, t2.CREATE_USERID, t2.description, t2.flag, t2.MOD_DATE, t2.MOD_USERID, t2.ORGANIZATION_ID, t2.owner, t2.STATUS_ID, t1.MESSAGE_TYPE_ID, t1.MOD_DATE, t1.MOD_USERID, t1.QUICK_LAUNCH_ID FROM EPCD13.QUICK_LAUNCH_DISTLIST t0, EPCD13.QUICK_LAUNCH_DISTLIST t1, EPCD13.DISTRIBUTION_LIST t2 WHERE t0.QUICK_LAUNCH_ID = ? AND t0.LIST_ID = t1.QL_DISTLIST_ID AND t1.LIST_ID = t2.LIST_ID(+)
If you look at the first column it requests, you will notice that it selects t1.QL_DISTLIST_ID... The problem is, I have no column by that name anywhere in my database! Why is that column being selected? How does JPA generate the queries that it runs? If I knew that, I might be a little closer to figuring out what went wrong here or what I did wrong. Any help would be greatly appreciated.
I am running JUnit tests using an in-memory HSQLDB. Let's say I have a method that inserts some values into the DB, and I am checking whether the method inserted the values correctly. Note that the order of insertion is not important.
@Test
public void should_insert_correctly() {
    MyEntity[] expectedEntities = new MyEntity[2];
    // init expected entities
    Inserter out = new Inserter(session); // out: object under test
    out.insert();
    List list = session.createCriteria(MyEntity.class).list();
    assertTrue(list.contains(expectedEntities[0]));
    assertTrue(list.contains(expectedEntities[1]));
}
The problem is I cannot compare the expected entities to the actual ones because the expected entities' ids and the actual entities' ids differ. Since setId() of MyEntity is private (to prevent setting the id explicitly), I cannot set all of the entities' ids to 0 and compare them that way.
How can I compare the two result sets regardless of their ids?
I found this more practical: instead of fetching all the results at once, I fetch results according to the criteria and assert that they are not null.
public void should_insert_correctly() {
    Inserter out = new Inserter(session); // out: object under test
    out.insert();

    Criteria criteria;
    criteria = getCriteria(session, 0);
    assertNotNull(criteria.uniqueResult());
    criteria = getCriteria(session, 1);
    assertNotNull(criteria.uniqueResult());
}

private Criteria getCriteria(Session session, int i) {
    Criteria criteria = session.createCriteria(MyEntity.class);
    criteria.add(Restrictions.eq("x", expectedX[i]));
    criteria.add(Restrictions.eq("y", expectedY[i]));
    return criteria;
}
A stateful entity should not override equals -- that is, entities should be compared for equality by reference identity -- so List.contains will not work as you want.
What I do is use reflection to compare the fields of the original and reloaded entities. The function that walks over the fields of the objects ignores transient fields and those annotated as @Transient.
I don't find I need to ignore the id. When the object is first flushed to the database, Hibernate allocates it an id. When it is reloaded, the object will have the same id.
The flaw in your test is that you have not set transaction boundaries. You need to save the objects in one transaction. When you commit that transaction, Hibernate will flush the objects to the database and allocate their ids. Then in another transaction load the entities back from the database. You will get another set of objects that should have the same ids and persistent (i.e. non-transient) state.
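A rough sketch of that kind of reflection-based comparison (illustrative only, not the exact utility described above; a complete version would also walk superclass fields):

import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.Objects;
import javax.persistence.Transient;

public final class PersistentStateComparator {

    // Compares the persistent state of two entities of the same class,
    // skipping static fields, transient fields, and fields annotated @Transient.
    public static boolean persistentStateEquals(Object a, Object b) throws IllegalAccessException {
        if (a == null || b == null || a.getClass() != b.getClass()) return false;
        for (Field f : a.getClass().getDeclaredFields()) {
            int mods = f.getModifiers();
            if (Modifier.isStatic(mods) || Modifier.isTransient(mods)
                    || f.isAnnotationPresent(Transient.class)) continue;
            f.setAccessible(true);
            if (!Objects.equals(f.get(a), f.get(b))) return false;
        }
        return true;
    }
}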
I would try to implement the Object.equals(Object) method in your MyEntity class.
List.contains(Object) uses Object.equals(Object) (source: Java 6 API) to determine whether an object is in the list.
The method session.createCriteria(MyEntity.class).list() returns a list of new instances with the values you inserted (hopefully).
So you need to compare the values, and that is easily done via an implementation of Object.equals(Object).
Clarification edit:
You could ignore the ids in your equals method, so that the comparison only cares about "real values".
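For illustration, a minimal sketch of such an equals()/hashCode() pair, assuming (hypothetically) that MyEntity's business fields are x and y, as in the criteria above; java.util.Objects is used for the null-safe comparisons:

@Override
public boolean equals(Object o) {
    if (this == o) return true;
    if (!(o instanceof MyEntity)) return false;
    MyEntity other = (MyEntity) o;
    // compare only the "real" values, not the generated id
    return Objects.equals(x, other.x) && Objects.equals(y, other.y);
}

@Override
public int hashCode() {
    return Objects.hash(x, y); // must stay consistent with equals
}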
YAE (Yet Another Edit):
I recommend reading this article about the equals() method: Angelika Langer: Secrets Of Equal. It explains all background information very well.