Below is the response from my spatial view query in Couchbase when I provide bounding box parameters:
{
  "rows": [
    {
      "geometry": {
        "type": "Point",
        "coordinates": [-71.10364, 42.381411]
      },
      "value": {
        "location": {
          "type": "Point",
          "coordinates": [-71.10364, 42.381411]
        },
        "name": "test",
        "visibility": "public"
      },
      "id": "test",
      "key": [
        [-71.10364, -71.10364],
        [42.381411, 42.381411]
      ]
    }
  ]
}
and here is my spatial view function:
function (doc, meta) {
  if (doc.type == "folder" && doc.location && doc.location.type) {
    if (doc.location.type == 'Point') {
      var visibility = doc.enabled === true ? 'public' : 'private';
      emit(doc.location, {
        name: doc.name,
        folder_id: doc.folder_id,
        location: doc.location,
        visibility: visibility
      });
    }
  }
}
But the JSON response contains unwanted data, so I am wondering how I can remove the geometry and key parameters from the JSON response.
Also, the query returns only the first 10 records. Is there a way to set limit and skip parameters so the query returns all the data instead of just the first 10?
To answer the 2nd half of your question (please post two separate questions next time): yes, views support pagination. You can set the number of results, ask for x results per page, and request different pages.
See this: http://blog.couchbase.com/pagination-couchbase
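For example, a spatial view can be paged with the limit and skip query parameters on the view REST endpoint. A minimal sketch in Node.js; the port, bucket, design document, view name, and bounding box are placeholders for your own:

// Sketch: fetching page 2 of a spatial view over the REST API.
const page = 2;
const pageSize = 10;
const url = 'http://localhost:8092/mybucket/_design/mydesign/_spatial/myview'
  + '?bbox=-72,42,-71,43'                        // your bounding box
  + `&limit=${pageSize}&skip=${(page - 1) * pageSize}`;

fetch(url)
  .then(res => res.json())
  .then(data => console.log(data.rows));

Keep in mind that skip-based paging gets slower the deeper you page; the blog post above covers more efficient key-based pagination.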
And also: dev views only operate on part of your bucket. Publish them to get results that correspond to the entire data set.
You can't remove geometry and key; both are part of the result. If you don't want to use them, then simply don't do anything with them.
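For example, you can pick out just the part you care about on the client once the result comes back (response here stands for the parsed query result):

// Keep only the emitted value from each row, ignoring geometry and key.
const docs = response.rows.map(row => row.value);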
According to the Apache Druid documentation on native queries:
Timeseries queries normally fill empty interior time buckets with zeroes.
For example, if you issue a "day" granularity timeseries query for the interval 2012-01-01/2012-01-04, and no data exists for 2012-01-02, you will receive:
[
  {
    "timestamp": "2012-01-01T00:00:00.000Z",
    "result": { "sample_name1": <some_value> }
  },
  {
    "timestamp": "2012-01-02T00:00:00.000Z",
    "result": { "sample_name1": 0 }
  },
  {
    "timestamp": "2012-01-03T00:00:00.000Z",
    "result": { "sample_name1": <some_value> }
  }
]
This is controlled by the context flag "skipEmptyBuckets", whose default value is false (empty buckets are not skipped; they are zero-filled).
However, when querying timeseries data with Druid SQL, the default behavior is to skip all empty buckets, and I have to set the query context explicitly to get the results I want:
"context": {
"skipEmptyBuckets": true
}
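For reference, when issuing the SQL through Druid's HTTP API directly, the context can be attached to the request body. A sketch; the broker address and the query itself are examples:

// Sketch: passing the context with a direct call to Druid's SQL endpoint.
fetch('http://localhost:8082/druid/v2/sql', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    query: "SELECT FLOOR(__time TO DAY) AS day, COUNT(*) AS cnt FROM my_table GROUP BY 1",
    context: { skipEmptyBuckets: false } // keep zero-filled buckets
  })
}).then(res => res.json()).then(console.log);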
This troubles me a lot because I need zero-filling to show all buckets in Apache Superset's timeseries charts, but there is no way to set the query context there.
As far as I know, the SQL statement is internally translated into a native query, so why the inconsistency?
I want to fetch data and show it in the UI. Here's how I fetch the "movies" data:
let movies = yield this.store.findAll('movie');
When I log "movies", there is no data for "photos". In the network tab, the data coming back from Hasura looks like this:
{
  "data": {
    "movies": [
      {
        "id": "584db434-5caa-475e-b3ec-e98e742f0030",
        "movieid": "abc123",
        "description": "Penquins dancing in antactica",
        "photos": [
          { "id": "c4d2833a-4896-42b0-ae8b-0ab9fe71d1d4" },
          { "id": "e04697e3-21fe-4f0e-8012-443f26293340" }
        ]
      }
    ]
  }
}
But Ember.js can't read and render the relationship data (photos). Should the "photos" data look like this instead?
"photos": [c4d2833a-4896-42b0-ae8b-0ab9fe71d1d4, e04697e3-21fe-4f0e-8012-443f26293340]
How can I convert it, in Ember or in Hasura?
Thanks for updating your question!
Since you're using ember-data, you'll need a custom adapter and serializer to shape your data into the format ember-data expects (since there are an infinite number of ways APIs may structure data).
More information on that can be found here:
https://guides.emberjs.com/release/models/customizing-adapters/
and here: https://guides.emberjs.com/release/models/customizing-serializers/
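As a rough sketch (not a drop-in solution), a serializer along these lines would unwrap Hasura's top-level "data" key and keep the nested photo records; it assumes a classic DS-style app and a photos hasMany relationship on the movie model:

// app/serializers/movie.js
import DS from 'ember-data';

export default DS.RESTSerializer.extend(DS.EmbeddedRecordsMixin, {
  attrs: {
    // Deserialize the nested photo objects instead of expecting bare ids.
    photos: { embedded: 'always' }
  },

  normalizeResponse(store, primaryModelClass, payload, id, requestType) {
    // Hasura responds with { data: { movies: [...] } };
    // RESTSerializer expects { movies: [...] } at the top level.
    return this._super(store, primaryModelClass, payload.data, id, requestType);
  }
});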
Your data is fairly well structured already, so conversion should hopefully go well. Comment back if you have issues <3
I have an AppSync pipeline resolver. The first function queries an ElasticSearch database for the DynamoDB keys. The second function queries DynamoDB using the provided keys. This was all working well until I ran into the 1 MB limit of AppSync. Since most of the data is in a few attributes/columns I don't need, I want to limit the results to just the attributes I need.
I tried adding AttributesToGet and ProjectionExpression (from here) but both gave errors like:
{
  "data": {
    "getItems": null
  },
  "errors": [
    {
      "path": ["getItems"],
      "data": null,
      "errorType": "MappingTemplate",
      "errorInfo": null,
      "locations": [
        { "line": 2, "column": 3, "sourceName": null }
      ],
      "message": "Unsupported element '$[tables][dev-table-name][projectionExpression]'."
    }
  ]
}
My DynamoDB function request mapping template looks like this (it returns results as long as the data is under 1 MB):
#set($ids = [])
#foreach($pResult in ${ctx.prev.result})
  #set($map = {})
  $util.qr($map.put("id", $util.dynamodb.toString($pResult.id)))
  $util.qr($map.put("ouId", $util.dynamodb.toString($pResult.ouId)))
  $util.qr($ids.add($map))
#end
{
  "version" : "2018-05-29",
  "operation" : "BatchGetItem",
  "tables" : {
    "dev-table-name": {
      "keys": $util.toJson($ids),
      "consistentRead": false
    }
  }
}
I contacted the AWS people, who confirmed that ProjectionExpression is not currently supported and that it will be a while before they get to it.
Instead, I created a Lambda to pull the data from DynamoDB.
To limit the results from DynamoDB, I used $ctx.info.selectionSetList in AppSync to get the list of requested columns, then used that list to specify which attributes to pull from DynamoDB. I needed to fetch multiple results while maintaining order, so I used BatchGetItem and then merged the results with the original list of IDs using LINQ (which put the DynamoDB results back in the correct order, since BatchGetItem in C# does not preserve sort order the way the AppSync version does).
Because I was using C# with a number of libraries, the cold start time was a little long, so I used Lambda Layers pre-JITed to Linux, which brought the cold start down from ~1.8 seconds to ~1 second (with 1,024 MB of RAM for the Lambda).
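If it helps, here is the same idea sketched in Node.js rather than C#; the table name and the event shape are assumptions about how your resolver forwards the ElasticSearch keys and $ctx.info.selectionSetList:

// Sketch: project only the requested GraphQL fields and restore input order.
const AWS = require('aws-sdk');
const ddb = new AWS.DynamoDB.DocumentClient();
const TABLE_NAME = 'dev-table-name'; // placeholder

exports.handler = async (event) => {
  // Always include the key attributes so results can be re-matched below.
  const fields = [...new Set(['id', 'ouId', ...event.selectionSetList])];
  const names = Object.fromEntries(fields.map((f, i) => [`#f${i}`, f]));

  const res = await ddb.batchGet({
    RequestItems: {
      [TABLE_NAME]: {
        Keys: event.ids, // [{ id, ouId }, ...] from the ElasticSearch function
        ProjectionExpression: Object.keys(names).join(', '),
        ExpressionAttributeNames: names
      }
    }
  }).promise();

  // BatchGetItem does not preserve order; re-sort to match the input keys.
  const byKey = new Map(
    res.Responses[TABLE_NAME].map(item => [`${item.id}|${item.ouId}`, item])
  );
  return event.ids.map(k => byKey.get(`${k.id}|${k.ouId}`) || null);
};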
AppSync doesn't support projection, but you can explicitly define which fields to return in the response template instead of returning the entire result set:
{
  "id": "$ctx.result.get('id')",
  "name": "$ctx.result.get('name')",
  ...
}
declareUpdate();
// get the document
var myDoc = cts.doc("/heal/scripts/Test.json").toObject();
// add data
myDoc.prescribedPlayer = [
  {
    "default": "http://www.youtube.com/watch?v=hYB0mn5zh2c"
  }
];
// persist
xdmp.documentInsert("/heal/scripts/Test.json", myDoc, null, "scripts");
You're looking to add a new JSON property. You can do that using a REST Client API request, sending a PATCH command. Use an insert instruction in the patch.
See the note in Specifying Position in JSON, which indicates that
You cannot use last-child to insert a property as an immediate child of the root node of a document. Use before or after instead. For details, see Limitations of JSON Path Expressions.
Instead, your patch will look something like:
{
  "insert": {
    "context": "/topProperty",
    "position": "after",
    "content": [
      {
        "default": "http://www.youtube.com/watch?v=hYB0mn5zh2c"
      }
    ]
  }
}
where topProperty is a JSON property that is part of the root node of the JavaScript object you want to update.
If that approach is problematic (for instance, if there is no topProperty that's reliably available), you could also do a sequence of operations:
retrieve the document
edit the content in Python
update the document in the database
With this approach, there is the possibility that some other process may update the document while you're working on it. You can either rely on optimistic locking or a multi-statement transaction to work around that, depending on the potential consequences of someone else doing a write.
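For instance, with the REST API you can get optimistic locking by capturing the document's ETag on read and sending it back in an If-Match header on the write. A sketch in Node.js; it assumes content versioning is enabled on the REST instance, and the address and credentials are placeholders (your instance may require digest rather than basic auth):

// Sketch: read-modify-write guarded by the document version (ETag).
const url = 'http://localhost:8000/v1/documents?uri=/heal/scripts/Test.json';
const auth = 'Basic ' + Buffer.from('user:password').toString('base64');

async function addProperty() {
  const res = await fetch(url, { headers: { Authorization: auth } });
  const etag = res.headers.get('ETag'); // document version at read time
  const doc = await res.json();

  doc.prescribedPlayer = [{ default: 'http://www.youtube.com/watch?v=hYB0mn5zh2c' }];

  // If-Match makes the write fail with 412 if someone updated the doc meanwhile.
  const put = await fetch(url, {
    method: 'PUT',
    headers: {
      Authorization: auth,
      'Content-Type': 'application/json',
      'If-Match': etag
    },
    body: JSON.stringify(doc)
  });
  if (put.status === 412) {
    throw new Error('Document changed since read; re-fetch and retry');
  }
}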
Hey @Ankur, please check the Python method below:
# Requires: import requests, json (self.collection, self.baseUri, and
# self.auth are assumed to be set elsewhere on the class).
def PartialUpdateData(self, filename, content, context):
    self.querystring = {"uri": "/" + self.collection + "/" + filename}
    url = self.baseUri
    self.header = {'Content-Type': "application/json"}
    mydata = {
        "patch": [{
            "insert": {
                "context": context,
                "position": "before",
                "content": content
            }
        }]
    }
    resp = requests.patch(url + "/documents", data=json.dumps(mydata),
                          headers=self.header, auth=self.auth,
                          params=self.querystring)
    return resp.content
I hope this can solve your problem.
Is there a better way to return the record(s) after DS.Store#pushPayload is called? This is what I'm doing...
var payload = { id: 1, title: "Example" };
store.pushPayload('post', payload);
return store.getById('post', payload.id);
But, with regular DS.Store#push you get the inserted record returned. The only difference between the two, from what I can tell, is that DS.Store#pushPayload serializes the payload data with the correct serializers.
DS.Store#pushPayload can take an array of items, not just one, and the payload may contain side-loaded data. It processes a full payload and expects root keys in the payload:
{
"posts": [{
"id": 1,
"title": "title",
"comments": [1]
}],
"comments": [
//.. and so on ...
]
}
DS.Store#push expects a single record which has been normalized and contains no side loaded data (notice there is no root key):
{
"id": 1,
"title": "title",
"comments": [1]
}
For this reason, it makes sense for push to return the record, but for pushPayload to return nothing.
When you use pushPayload, a second lookup via store.find('post', 1) (or store.getById('post', 1)) is the way to go; I don't believe there is a better way.
As of this PR pushPayload can now return an array of all the records pushed into the store, once the 'ds-pushpayload-return' feature flag has been enabled.
At the moment, this feature isn't available in a standard or beta release; you'll have to use
"ember-data": "emberjs/data#master",
(i.e. Canary) in your package.json in order to access it. I'm not sure when the feature will be generally available.
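If memory serves, canary feature flags are switched on via EmberENV in config/environment.js; a sketch (the flag only takes effect on a build that actually ships it):

// config/environment.js
module.exports = function (environment) {
  var ENV = {
    EmberENV: {
      FEATURES: {
        'ds-pushpayload-return': true // enable the new pushPayload return value
      }
    }
    // ...
  };
  return ENV;
};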