Predicate query nested fetch - jpa-2.0

I have a list of predicates and use them as below:
criteriaQuery = criteriaQuery.where(builder.and(toArray(predicates, Predicate.class)));
I use fetch for the lazily loaded Person entity as below:
Root<Employment> root = criteriaQuery.from(type);
root.fetch(Employment_.person);
However, I also need to access the Education entity, which is mapped in Person.
I tried the following, but it doesn't work:
root.fetch(Employment_.person, Person_.education);
In other words, I need to do a nested fetch somehow. Any suggestions?
At the moment, if I try to access getEmployment().getPerson().getEducation(), it fails with a lazy initialization exception.
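One thing worth trying (a sketch, not verified against your exact mappings): fetch() returns a Fetch, which is itself a FetchParent, so the nested fetch can be expressed by chaining a second fetch() call off the first one:

// requires javax.persistence.criteria.Fetch and javax.persistence.criteria.JoinType
Root<Employment> root = criteriaQuery.from(type);

// fetch Person eagerly, then fetch Education off the returned Fetch node
Fetch<Employment, Person> personFetch = root.fetch(Employment_.person, JoinType.LEFT);
personFetch.fetch(Person_.education, JoinType.LEFT);

The LEFT join type is only there so rows without a Person or Education are not dropped; the default fetch join is an inner join.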

Related

How does the @Relationship annotation in Spring Data Neo4j order the results?

I would like to order the returned outgoing nodes in a specific way, based on a relationship property. Can this be customized at all? I can't even find anything in the docs about what the default ordering is.
#Relationship("CREATED")
private List<Node> nodes;
There is no default sorting in Spring Data Neo4j, and there is no default sorting in the database either.
You should not make any assumptions about the order if there is no explicit ordering in the query.
If you want to go the Cypher route, you have to define a custom query.
For this, I am referring to the movie graph (:play movies) to populate the database and have some example data.
If you call:
MATCH (m:Movie{title:'The Matrix'}) OPTIONAL MATCH (m)<-[:ACTED_IN]-(p) return m, collect(p)
the actors will be returned in arbitrary order.
But if you add an ordering, e.g. by the actor's name, you get the desired result:
MATCH (m:Movie{title:'The Matrix'}) OPTIONAL MATCH (m)<-[:ACTED_IN]-(p) WITH m, p ORDER BY p.name return m, collect(p)
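As a sketch of how such a custom query could be wired into a repository (assuming Spring Data Neo4j 6.x and a hypothetical Movie domain class; in older OGM-based versions the @Query annotation lives in org.springframework.data.neo4j.annotation instead):

import org.springframework.data.neo4j.repository.Neo4jRepository;
import org.springframework.data.neo4j.repository.query.Query;
import org.springframework.data.repository.query.Param;

public interface MovieRepository extends Neo4jRepository<Movie, Long> {

    // actors are ordered by name before being collected and mapped back onto the Movie
    @Query("MATCH (m:Movie {title: $title}) "
         + "OPTIONAL MATCH (m)<-[r:ACTED_IN]-(p) "
         + "WITH m, r, p ORDER BY p.name "
         + "RETURN m, collect(r), collect(p)")
    Movie findByTitleWithActorsOrderedByName(@Param("title") String title);
}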

How to populate table based on the condition of another table's column value in Django

I am kinda new to Django and I am stuck.
I am building an app to store an equipment registry. I have a model for the equipment list, and it has a status value that can be "available", "booked", or "maintenance".
I also have a model for all equipment that is not available. Now, in my HTML for the "not available registry", I want to show only the details of equipment in the list that is marked as "booked" or "maintenance".
There are three ways of doing this.
To filter objects of the model that are marked as "booked" or "maintenance", you can use complex lookups with Q objects. They allow you to filter objects with an OR statement. Here you need to find objects that have status set to "booked" OR to "maintenance". The query should look like this:
from django.db.models import Q
Equipment.objects.filter(Q(status='booked') | Q(status='maintenance'))
The second way of doing this is to use the __in lookup to filter the objects that you need:
not_available_status = ['booked', 'maintenance']
Equipment.objects.filter(status__in=not_available_status)
And the final way is to exclude the objects that you don't need:
Equipment.objects.exclude(status='available')
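If it helps, here is a minimal view sketch that passes the excluded queryset to a template. The view name, the template name and the Equipment import path are assumptions based on your description:

# views.py (sketch, names are assumed)
from django.shortcuts import render
from .models import Equipment  # assumed location of your model

def not_available_registry(request):
    # everything that is not "available", i.e. "booked" or "maintenance"
    equipment = Equipment.objects.exclude(status='available')
    return render(request, 'not_available_registry.html', {'equipment': equipment})

In the template you can then loop over equipment and show the details you need.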

Is there a way to address nested properties in AWS DynamoDB for the purpose of a documentClient.query() call?

I am currently testing how to design a query for the AWS.DynamoDB.DocumentClient query() call, which takes params: DocumentClient.QueryInput and is used for retrieving a data collection from a table in DynamoDB.
The query seems simple and works fine as long as the indexes are of type String or Number. What I am not able to build is a query that uses a valid index and filters on an attribute that is nested (see my data structure below).
I am using FilterExpression, where the filtering logic can be defined, and that seems to work fine in all cases except when trying to filter on a nested attribute.
These are the current parameters I am feeding the query with:
var parameters = {
  TableName: 'myTable',
  ProjectionExpression: 'HashKey, RangeKey, Artist, #SpecialStatus, Message, Track, Statistics',
  ExpressionAttributeNames: { '#SpecialStatus': 'Status' },
  IndexName: 'Artist-index',
  KeyConditionExpression: 'Artist = :ArtistName',
  ExpressionAttributeValues: {
    ':ArtistName': 'Blind Guardian',
    ':Track': 'Mirror Mirror'
  },
  FilterExpression: 'Track = :Track'
};
Data structure in DynamoDB's table:
{
  'Artist': 'Blind Guardian',
  ..
  'Track': 'Mirror Mirror',
  'Statistics': [
    {
      'Sales': 42,
      'WrittenBy': 'Kursch'
    }
  ]
}
Let's assume we want to filter all entries from the DB by using Artist in the KeyConditionExpression. We can achieve this by feeding Artist with :ArtistName. Now the question: how do I retrieve records that I can filter on WrittenBy, which is nested inside Statistics?
To the best of my knowledge, we are not able to use any type other than String, Number or Binary for secondary indexes. I've been experimenting with secondary indexes and sort keys as well, but without luck.
I've tried documentClient.scan(), same story. Still no luck accessing nested attributes in a List (the FilterExpression just won't accept it).
I am aware of the possibility of filtering the results on the "application" side once the records are retrieved (by Artist, for instance), but I am interested in doing the filtering in the FilterExpression.
If I understand your problem correctly, you'd like to create a query that filters on the value of a complex attribute (in this case, a list of objects).
You can filter on the contents of a list by indexing into the list:
var params = {
  TableName: "myTable",
  FilterExpression: "Statistics[0].WrittenBy = :writtenBy",
  ExpressionAttributeValues: {
    ":writtenBy": 'Kursch'
  }
};
Of course, if you don't know the specific index, this won't really help you.
Alternatively, you could use the CONTAINS function to test if the object exists in your list. The CONTAINS function will require all the attributes in the object to match the condition. In this case, you'd need to provide Sales and WrittenBy, which probably doesn't solve your problem here.
The shape of your data is making your access pattern difficult to implement, but that is often the case with DDB. You are asking DDB to support a query of a list of objects, where the object has a specific attribute with a specific value. As you've seen, this is quite tricky to do. As you know, getting the data model to correctly support your access patterns is critical to your success with DDB. It can also be difficult to get right!
A couple of ideas that would make your access pattern easier to implement:
Move WrittenBy out of the complex attribute and put it alongside the other top-level attributes. This would allow you to use a simple FilterExpression on the WrittenBy attribute.
If the WrittenBy attribute must stay within the Statistics list, make it stand alone (e.g. [{writtenBy: Kursch}, {Sales: 42},...]). This way, you'd be able to use the CONTAINS keyword in your search (see the sketch after this list).
Create a secondary index with the WrittenBy field in either the PK or SK (whichever makes sense for your data model and access patterns).
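Expanding on the second idea, a sketch of what the filter could look like; it assumes Statistics has been reshaped into stand-alone entries such as [{ WrittenBy: 'Kursch' }, { Sales: 42 }]:

// Sketch: contains() on a list matches whole elements, so the stand-alone
// { WrittenBy: 'Kursch' } entry can be matched exactly
var AWS = require('aws-sdk');
var documentClient = new AWS.DynamoDB.DocumentClient();

var params = {
  TableName: 'myTable',
  IndexName: 'Artist-index',
  KeyConditionExpression: 'Artist = :ArtistName',
  FilterExpression: 'contains(Statistics, :wb)',
  ExpressionAttributeValues: {
    ':ArtistName': 'Blind Guardian',
    ':wb': { WrittenBy: 'Kursch' }  // must equal the whole list element
  }
};

documentClient.query(params, function(err, data) {
  if (err) console.error(err);
  else console.log(data.Items);
});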

ndb verify entity uniqueness in transaction

I've been trying to create entities with a property which should be unique or None, something similar to:
class Thing(ndb.Model):
    something = ndb.StringProperty()
    unique_value = ndb.StringProperty()
Since ndb has no way to specify that a property should be unique, it is only natural that I do this manually, like this:
def validate_unique(the_thing):
    if the_thing.unique_value and Thing.query(Thing.unique_value == the_thing.unique_value).get():
        raise NotUniqueException
This works like a charm until I want to do this in an ndb transaction which I use for creating/updating entities. Like:
@ndb.transactional
def create(the_thing):
    validate_unique(the_thing)
    the_thing.put()
However, ndb seems to only allow ancestor queries inside transactions; the problem is my model does not have an ancestor/parent. I could do the following to prevent this error from popping up:
@ndb.non_transactional
def validate_unique(the_thing):
    ...
This feels a bit out of place: declaring something to be a transaction and then having one (important) part done outside of the transaction. I'd like to know if this is the way to go or if there is a (better) alternative.
Also some explanation as to why ndb only allows ancestor queries would be nice.
Since your uniqueness check involves a (global) query, it is subject to the datastore's eventual consistency, meaning it won't work reliably: the query might not detect freshly created entities.
One option would be to switch to an ancestor query (or some other strongly consistent method), if your expected usage allows such a data architecture; more details in the same article.
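For illustration, a sketch of what the first option could look like, assuming every Thing is created under a common parent key (which makes the check an ancestor query and therefore allowed inside the transaction, at the cost of serializing writes to that single entity group):

# Sketch: only valid if every Thing is created with the same parent key,
# e.g. thing_root = ndb.Key('ThingRoot', 'root')
def validate_unique(the_thing, thing_root):
    if the_thing.unique_value:
        existing = Thing.query(
            Thing.unique_value == the_thing.unique_value,
            ancestor=thing_root).get()
        if existing:
            raise NotUniqueException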
Another option is to use an additional piece of data as a temporary cache: store a list of all newly created entities for "a while" (giving them ample time to become visible to the global query) and check that list in validate_unique() in addition to the query results. This would allow you to make the query outside the transaction and only enter the transaction if uniqueness is still possible; the ultimate check is then the manual check of the cache, inside the transaction (i.e. no query inside the transaction).
A third option exists (with some extra storage consumption as the price), based on the datastore's enforcement of unique entity IDs for a given entity model with the same parent (or no parent at all). You could have a model like this:
from google.appengine.api import memcache
from google.appengine.ext import ndb


class Unique(ndb.Model):  # will use the unique values as the specified entity IDs!
    something = ndb.BooleanProperty(default=False)
which you'd use like this (the example uses a Unique parent key, which allows re-using the model for multiple properties with unique values; you can drop the parent altogether if you don't need it):
@ndb.transactional(xg=True)  # cross-group: the Unique entities and the_thing live in different entity groups
def create(the_thing):
    if the_thing.unique_value:
        parent_key = get_unique_parent_key()
        exists = Unique.get_by_id(the_thing.unique_value, parent=parent_key)
        if exists:
            raise NotUniqueException
        Unique(id=the_thing.unique_value, parent=parent_key).put()
    the_thing.put()
def get_unique_parent_key():
    parent_id = 'the_thing_unique_value'
    parent_key = memcache.get(parent_id)
    if not parent_key:
        parent = Unique.get_by_id(parent_id)
        if not parent:
            parent = Unique(id=parent_id)
            parent.put()
        parent_key = parent.key
        memcache.set(parent_id, parent_key)
    return parent_key
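For completeness, a hypothetical call site, using Thing and NotUniqueException exactly as defined in the question:

try:
    create(Thing(something='foo', unique_value='handle-42'))
except NotUniqueException:
    pass  # 'handle-42' was already claimed by another Thing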

Get a list of entities that match ALL the supplied ids for a related entity field in a many-to-many association

I am trying to get a list of users from the database that have ALL of the tags in a given set.
The User entity has a many-to-many association to a Tag entity.
The OR version, where just one of the tags has to match, is working using the following code:
$tagIds = array(29, 30);

$this->createQueryBuilder('u')
    ->select('u', 't')
    ->leftJoin('u.tags', 't')
    ->where("t IN (:tagIds)")
    ->setParameter("tagIds", $tagIds)
;
Can anybody help me get it to work so that ALL tag ids must match?
Keep in mind this is a query to get a list of users, not just one user, so I guess every user must be checked to see if they match all the supplied tag ids.
I have tried a bunch of queries but haven't had any luck so far...
A simple brute-force query:
$tagIds = array(29, 30);

$qb = $this->createQueryBuilder('u');
$qb->select('u');

foreach ($tagIds as $idx => $tagId) {
    $joinAlias = "t{$idx}";
    $qb->leftJoin('u.tags', $joinAlias)
       ->addSelect($joinAlias)
       ->andWhere("{$joinAlias}.id = $tagId AND $joinAlias IS NOT NULL");
}
This is a really brute-force and costly query: each tag becomes a separate join, so if you have lots of users and tags it will take ages to execute.
Since the database is the bottleneck of your application, you should make a simple query to the database and then process the data in your application: use your existing query and then check which users have both tags in their collections (see the sketch below).
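A rough sketch of that application-side check; the getTags()/getId() accessors are assumptions about your entities:

$tagIds = array(29, 30);

$users = $this->createQueryBuilder('u')
    ->select('u', 't')
    ->leftJoin('u.tags', 't')
    ->where('t IN (:tagIds)')
    ->setParameter('tagIds', $tagIds)
    ->getQuery()
    ->getResult();

// keep only the users that have every requested tag
$matching = array_filter($users, function ($user) use ($tagIds) {
    $userTagIds = array();
    foreach ($user->getTags() as $tag) {   // assumed accessor
        $userTagIds[] = $tag->getId();     // assumed accessor
    }
    return count(array_intersect($tagIds, $userTagIds)) === count($tagIds);
});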
Ok... after a lot of searching, this seems to work for me (group by user and require the number of distinct matching tags to equal the number of supplied tag ids):
$tagIds = array(29, 30);

$this->createQueryBuilder('u')
    ->select('u', 't')
    ->leftJoin('u.tags', 't')
    ->where("t IN (:tagIds)")
    ->groupBy('u.id')
    ->having('COUNT(DISTINCT t.id) = ' . count($tagIds))
    ->setParameter("tagIds", $tagIds)
;