I'm upgrading my code to use Spring Data Neo4j 6.1.2
With an earlier version, I was able to set the relationship direction to UNDIRECTED:
@Relationship(type = RelatedEntity.TYPE, direction = Relationship.UNDIRECTED)
private Set<RelatedEntity> relatedEntities = new HashSet<>();
With this, I could get mutual relationships from either node. I assume the direction was simply being ignored in the generated Cypher.
In the latest version, I only see INCOMING and OUTGOING. Is there a way I can replicate the previous behaviour or would I have to write custom queries?
There is no replacement for the UNDIRECTED relationship.
Spring Data Neo4j 6 requires you to specify the very same direction that you have in your data. If you need a bidirectional definition, e.g.
(user1)-[:KNOWS]->(user2)-[:KNOWS]->(user1)
you would have to map the relationship on the entity twice, once as INCOMING and once as OUTGOING.
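A sketch of what such a twice-mapped entity could look like in SDN 6 (the User class and KNOWS relationship type are illustrative, not taken from your model):

```java
import java.util.HashSet;
import java.util.Set;

import org.springframework.data.neo4j.core.schema.GeneratedValue;
import org.springframework.data.neo4j.core.schema.Id;
import org.springframework.data.neo4j.core.schema.Node;
import org.springframework.data.neo4j.core.schema.Relationship;

// Mapping sketch: the same relationship type is declared twice,
// once per direction, to cover both traversal directions.
@Node
public class User {

    @Id @GeneratedValue
    private Long id;

    // (this)-[:KNOWS]->(other)
    @Relationship(type = "KNOWS", direction = Relationship.Direction.OUTGOING)
    private Set<User> knows = new HashSet<>();

    // (other)-[:KNOWS]->(this)
    @Relationship(type = "KNOWS", direction = Relationship.Direction.INCOMING)
    private Set<User> knownBy = new HashSet<>();
}
```

Note that this changes the shape of the entity: code that previously read one undirected collection now has to consult both sets.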
Related
In my Corda project, state may evolve over time. I have made the state of type LinearState. Now I want to retrieve the history of a Corda state, that is, how it evolved over time. How can I see the evolution history of a particular state in Corda?
Particularly, I want to access the complete transaction chain of a state.
Of course, without access to your code this answer will vary, but there are two pieces of documentation to be aware of here.
What you want to perform is essentially a vault query (depending on what information you're looking to get).
From the docs on LinearState:
Whenever a node records a new transaction, it also decides whether it should store each of the transaction’s output states in its vault. The default vault implementation makes the decision based on the following rules.
source: https://docs.corda.net/docs/corda-os/4.6/api-states.html#the-vault
That being said, you perform your vault query just like you would for other states. Here are the docs on the vault query API: https://docs.corda.net/docs/corda-os/4.6/api-vault-query.html
If you have the linearId, you can do it from the node shell, or by using H2 and looking in tables like VAULT_LINEAR_STATES.
If you want an example of querying in code, take a look at the obligation CorDapp, which takes the linearId as a parameter to the flow.
// 1. Retrieve the IOU State from the vault using LinearStateQueryCriteria
List<UUID> listOfLinearIds = Arrays.asList(stateLinearId.getId());
QueryCriteria queryCriteria = new QueryCriteria.LinearStateQueryCriteria(null, listOfLinearIds);
Vault.Page results = getServiceHub().getVaultService().queryBy(IOUState.class, queryCriteria);
StateAndRef inputStateAndRefToSettle = (StateAndRef) results.getStates().get(0);
IOUState inputStateToSettle = (IOUState) ((StateAndRef) results.getStates().get(0)).getState().getData();
Source Code example here: https://github.com/corda/samples-java/blob/master/Advanced/obligation-cordapp/workflows/src/main/java/net/corda/samples/flows/IOUSettleFlow.java#L56-L61
I am working with Django and I am a bit lost on how to extract information from models (tables).
I have a table containing different information from various sensors. What I would like to know is whether it is possible, using the Django models, to obtain for each sensor (each sensor has an identifier) the last row of data, based on the timestamp column.
In SQL it would be something like this (the query is probably not correct, but I think you can see what I'm trying to do):
SELECT sensorID,timestamp,sensorField1,sensorField2
FROM sensorTable
GROUP BY sensorID
ORDER BY max(timestamp);
I have seen that a group_by() function exists and also latest(), but I don't get anything coherent, and I'm also not sure whether I'm going about this the best way.
Can anyone help me get started with this topic? I imagine it is very easy but it is a new world and it is difficult to start.
Greetings!
When you use a PostgreSQL database, you can make use of the .distinct(..) method [Django-doc] of the queryset, passing the field(s) on which the rows should be distinct.
So you can obtain the latest sensors in Django with:
SensorModel.objects.order_by('sensor', '-timestamp').distinct('sensor')
We first order by sensor (which is required for a .distinct(..) on that field), and break ties (two rows for the same sensor) by ordering on the timestamp in descending order. For each sensor, the first row, which is the one .distinct(..) keeps, is therefore the latest SensorModel object.
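The order-then-deduplicate logic can be sketched in plain Python, without Django, to make clear what the queryset computes (the field names mirror the question; the sample data is made up):

```python
# Plain-Python sketch of .order_by('sensor', '-timestamp').distinct('sensor'):
# sort rows by sensor, then newest-first, and keep the first row per sensor.

def latest_per_sensor(rows):
    """Return the most recent row for each sensor id."""
    ordered = sorted(rows, key=lambda r: (r["sensorID"], -r["timestamp"]))
    latest = {}
    for row in ordered:
        # The first row seen for a sensor is its newest one; later
        # (older) rows for the same sensor are ignored.
        latest.setdefault(row["sensorID"], row)
    return list(latest.values())

rows = [
    {"sensorID": 1, "timestamp": 10, "sensorField1": "a"},
    {"sensorID": 1, "timestamp": 20, "sensorField1": "b"},
    {"sensorID": 2, "timestamp": 5,  "sensorField1": "c"},
]
print(latest_per_sensor(rows))
```

On databases other than PostgreSQL, where .distinct(*fields) is unavailable, you would need this kind of per-group reduction via a subquery or annotation instead.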
I'm trying to add a @DynamoDBVersionAttribute to incorporate optimistic locking when accessing/updating items in a DynamoDB table. However, I'm unable to figure out how exactly to add the version attribute.
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBMapper.OptimisticLocking.html seems to state that using it as an annotation in the class that creates the table is the way to go. However, our codebase is creating new tables in a format similar to this:
AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard().build();
DynamoDB dynamoDB = new DynamoDB(client);

List<AttributeDefinition> attributeDefinitions = new ArrayList<>();
attributeDefinitions.add(new AttributeDefinition().withAttributeName("Id").withAttributeType("N"));

List<KeySchemaElement> keySchema = new ArrayList<>();
keySchema.add(new KeySchemaElement().withAttributeName("Id").withKeyType(KeyType.HASH));

CreateTableRequest request = new CreateTableRequest()
        .withTableName(tableName)
        .withKeySchema(keySchema)
        .withAttributeDefinitions(attributeDefinitions)
        .withProvisionedThroughput(new ProvisionedThroughput()
                .withReadCapacityUnits(5L)
                .withWriteCapacityUnits(6L));

Table table = dynamoDB.createTable(request);
I'm not able to find out how to add the version attribute through Java code like the above. It's not an attribute definition, so I'm unsure where it goes. Any guidance as to where I can add this version attribute in the CreateTableRequest?
As far as I'm aware, the @DynamoDBVersionAttribute annotation for optimistic locking is only available for tables modeled specifically for DynamoDBMapper queries. Using DynamoDBMapper is not a terrible approach, since it effectively gives you an ORM for CRUD operations on DynamoDB items.
But if your existing codebase can't make use of it, your next best bet is probably to use conditional writes that increment a version number only when it equals what you expect it to be (i.e. roll your own optimistic locking). Unfortunately, you would need to include the increment and condition in every write you want optimistically locked.
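To make the roll-your-own approach concrete, here is a minimal in-memory sketch of the semantics (this is deliberately not the AWS SDK; it stands in for an UpdateItem call with ConditionExpression "version = :expected" and UpdateExpression "SET payload = :p, version = version + :one", and all names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// In-memory sketch of optimistic locking via a conditional write: the update
// succeeds only when the caller's expected version matches the stored version,
// and the version is incremented atomically with the write.
class VersionedStore {
    static final class Item {
        final String payload;
        final long version;
        Item(String payload, long version) { this.payload = payload; this.version = version; }
    }

    private final Map<String, Item> table = new HashMap<>();

    void put(String key, String payload) {
        table.put(key, new Item(payload, 1L));   // new items start at version 1
    }

    Optional<Item> get(String key) {
        return Optional.ofNullable(table.get(key));
    }

    // Returns false when the condition fails, i.e. another writer got in first
    // (DynamoDB would throw ConditionalCheckFailedException instead).
    synchronized boolean conditionalUpdate(String key, String newPayload, long expectedVersion) {
        Item current = table.get(key);
        if (current == null || current.version != expectedVersion) {
            return false;
        }
        table.put(key, new Item(newPayload, expectedVersion + 1));
        return true;
    }
}
```

A losing writer must re-read the item, reapply its change on top of the new state, and retry with the fresh version number.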
Your code just creates a table, but then in order to use DynamoDBMapper to access that table, you need to create a class that represents it. For example if your table is called Users, you should create a class called Users, and use annotations to link it to the table.
You can keep your table creation code, but you need to create the DynamoDBMapper class. You can then do all of your loading, saving and querying using the DynamoDBMapper class.
When you have created the class, just give it a field called version and put the @DynamoDBVersionAttribute annotation on it; DynamoDBMapper will take care of the rest.
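A mapping sketch of what that Users class could look like with the SDK v1 DynamoDBMapper annotations (the table and attribute names are assumptions and must match what your CreateTableRequest builds; note the version attribute does not need to appear in the key schema or attribute definitions):

```java
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBHashKey;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBTable;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBVersionAttribute;

// DynamoDBMapper model class: the hash key maps to the "Id" attribute from
// the CreateTableRequest; the version field is managed by the mapper.
@DynamoDBTable(tableName = "Users")
public class Users {

    private Long id;
    private Long version;

    @DynamoDBHashKey(attributeName = "Id")
    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }

    @DynamoDBVersionAttribute
    public Long getVersion() { return version; }
    public void setVersion(Long version) { this.version = version; }
}
```

With this in place, mapper.save(user) automatically adds the conditional check on the version and increments it for you.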
I have a cypher query that is supposed to return nodes and edges so that I can render a representation of my graph in a web app. I'm running it with the query method in Neo4jOperations.
start n=node({id}) match n-[support:SUPPORTED_BY|INTERPRETS*0..5]->(argument:ArgumentNode)
return argument, support
Earlier, I was using spring data neo4j 3.3.1 with an embedded database, and this query did a fine job of returning relationship proxies with start nodes and end nodes. I've upgraded to spring data neo4j 4.0.0 and switched to using a remote server, and now it returns woefully empty LinkedHashMaps.
This is the json response from the server:
{"commit":"http://localhost:7474/db/data/transaction/7/commit","results":[{"columns":["argument","support"],
"data":[
{"row":[{"buildVersion":-1},[]]},
{"row":[{"buildVersion":-1},[{}]]}
]}],"transaction":{"expires":"Mon, 12 Oct 2015 06:49:12 +0000"},"errors":[]}
I obtained this json by putting a breakpoint in DefaultRequest.java and executing EntityUtils.toString(response.getEntity()). The query is supposed to return two nodes which are related via an edge of type INTERPRETS. In the response you see [{}], which is where data about the edge should be.
How do I get a response with the data I need?
Disclaimer: this is not a definitive answer, just what I've pieced together so far.
You can use the queryForObjects method in Neo4jOperations, and make sure that your query returns a path. Example:
neo4jOperations.queryForObjects(ArgumentNode.class, "start n=node({id}) match path=n-[support:SUPPORTED_BY|INTERPRETS*0..5]->(argument:ArgumentNode) return path", params);
The POJOs that come back should be hooked together properly based on their relationship annotations. Now you can poke through them and manually build a set of edges that you can serialize. Not ideal, but workable.
Docs suggesting that you return a path:
From http://docs.spring.io/spring-data/data-neo4j/docs/4.0.0.RELEASE/reference/html/#_cypher_queries:
For the query methods that retrieve mapped objects, the recommended query format is to return a path, which should ensure that known types get mapped correctly and joined together with relationships as appropriate.
Explanation of why queryForObjects helps:
Under the hood, there is a distinction between different types of queries. They have GraphModelQuery, RowModelQuery, and GraphRowModelQuery, each of which pass a different permutation of resultDataContents: ["row", "graph"] to the server. If you want data sufficient to reconstruct the graph, you need to make sure "graph" is in the list.
You can find this code inside ExecuteQueriesDelegate:
if (type != null && session.metaData().classInfo(type.getSimpleName()) != null) {
Query qry = new GraphModelQuery(cypher, parameters);
...
} else {
RowModelQuery qry = new RowModelQuery(cypher, parameters);
...
}
Using queryForObjects allows you to provide a type, and kicks things over into GraphModelQuery mode.
I am in the process of cleaning a database. This process involves changing the format of certain fields and getting rid of some data integrity issues.
I developed a program with Spring Data 1.1 to process the records in batches. The problem arises with 2 entities in a @OneToOne relationship. The record for Entity B does not exist although Entity A has a reference to it. My job is to clear the reference to Entity B if that is the case.
The question is: should I pre-process the data to clean this or can I adjust Spring Data or JPA settings to put null in the field if the Entity is not found?
It is "normal" - with this data - to have a FK in Entity A that does not exist in Entity B, so I want to handle this in my code and not have to pre-process the data with an additional step or other tool. The data will be arriving in batches so any pre-processing makes things more complicated for the user.
In summary, I want Spring Data to set the field to null and continue the process instead of getting an org.springframework.orm.jpa.JpaObjectRetrievalFailureException: Unable to find....
Perhaps you are looking for the @NotFound annotation?
Here is a post that talks about it.
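A sketch of how the annotation could be applied (note @NotFound is Hibernate-specific, not plain JPA; the entity and column names here are illustrative):

```java
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.JoinColumn;
import javax.persistence.OneToOne;

import org.hibernate.annotations.NotFound;
import org.hibernate.annotations.NotFoundAction;

// With NotFoundAction.IGNORE, a foreign key that points at a missing
// Entity B row is mapped to null instead of raising an
// EntityNotFoundException during loading.
@Entity
public class EntityA {

    @Id
    private Long id;

    @OneToOne
    @NotFound(action = NotFoundAction.IGNORE)
    @JoinColumn(name = "entity_b_id")
    private EntityB entityB;
}
```

Be aware this only makes the load succeed with a null reference; to actually clear the dangling FK you would still save the entity back afterwards.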
I have had the same problem because my ORM mapping had a wrong @OneToOne unidirectional relationship.