I'm trying to play around with GYP and got stuck defining a "default variable".
I have 2 files (one main, and one expected to store common data, included in the main one):
1) v_common.gypi:
{
'variables': {
'mymodule%': "blblblb",
'mymoduleLibs' : "<(mymodule)/Libs",
},
'target_defaults': {
},
}
2) mymodule.gyp
{
'variables':{
},
'includes': [
'v_common.gypi',
], # includes
'targets': [
{
'target_name': 'myModule',
'type': 'none',
'actions' : [
{
'action_name': 'create_libs_folder',
'inputs': ['one_file'],
'outputs':['blabla'],
'action': ['mkdir', '<(mymoduleLibs)'],
}
]
},
], # targets
}
Per my expectations:
mymodule should get the value "blblblb" (since it wasn't defined anywhere previously),
then I should be able to use it to compute the value of mymoduleLibs,
and finally mymoduleLibs should be usable in mymodule.gyp.
But I just get an error that mymodule is an "Undefined variable". If I define mymodule exactly, as in the example below (without the percent sign), everything works fine:
'variables': {
'mymodule': "blblblb",
'mymoduleLibs' : "<(mymodule)/Libs",
}
Any ideas?
I've found the issue. It's described here: https://groups.google.com/forum/?fromgroups#!topicsearchin/gyp-developer/default/gyp-developer/1EWXAXe-qWs
The correct workaround is to define default variables in a nested sub-dict 'variables': {...}, so they are evaluated before the other variables are expanded, like below:
{
'variables': {
'variables': {
'mymodule%': "blblblb",
},
'mymoduleLibs' : "<(mymodule)/Libs",
},
'target_defaults': {
},
}
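For completeness: the % suffix only marks the value as a default, so it can still be overridden from outside without editing v_common.gypi, e.g. with a -D definition on the gyp command line (the path shown here is hypothetical):
gyp mymodule.gyp -D mymodule=/some/other/path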
I'm trying to do a regex match inside an aggregation pipeline $lookup.
So let's assume the following query:
$lookup: {
from: 'some-collection',
let: {
someIds: '$someIds'
},
pipeline: [
{
$match: {
$expr: {
$and: [
{
$in: ['$someId', '$$someIds']
},
{
$not: {
$eq: ['$status', 'archived']
}
}
]
}
}
}
]
}
This all works great; I can match on multiple conditions, and it works.
However, if I want to add another condition using an array of regexes, I can't get it to work:
$lookup: {
from: 'some-collection',
let: {
someIds: '$someIds'
},
pipeline: [
{
$match: {
$expr: {
$and: [
{
$in: ['$someId', '$$someIds']
},
{
$not: {
$eq: ['$status', 'archived']
}
},
{
$in: ['$some-type', [/type1/, /type2/]]
}
]
}
}
}
]
}
Why does this not work? As I understand it from the documentation, I should be able to use a regex this way inside an $in operator, and I can confirm that it works, since we use it elsewhere. However, nested within a $lookup pipeline it does not.
Is this a bug, or am I overlooking something? Is there another way I can do this kind of regex match?
Evidently, the problem appears to be that I was attempting to regex match inside the $expr operator. I'm unsure why it does not work, and I can't find anything about it in the documentation.
But by moving it to a separate $match stage within the pipeline, it worked:
$lookup: {
from: 'some-collection',
let: {
someIds: '$someIds'
},
pipeline: [
{
$match: {
$expr: {
$and: [
{
$in: ['$someId', '$$someIds']
},
{
$not: {
$eq: ['$status', 'archived']
}
}
]
}
}
},
{
$match: {
'some-type': {
$in: [/type1/, /type2/]
}
}
}
]
}
If anyone can elaborate on why this is the case, feel free.
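For what it's worth, the cause seems to be that everything inside $expr is evaluated as an aggregation expression, and the aggregation version of $in only tests exact membership, so the regexes are compared as literal values rather than applied as patterns; the query-language $in used in a plain $match is the variant that understands regexes. If you're on MongoDB 4.2 or later and want to keep the condition inside $expr, a $regexMatch-based check should also work. A rough, untested sketch (it assumes some-type is always a string):
{
  $match: {
    $expr: {
      $and: [
        { $in: ['$someId', '$$someIds'] },
        { $not: { $eq: ['$status', 'archived'] } },
        {
          $or: [
            { $regexMatch: { input: '$some-type', regex: /type1/ } },
            { $regexMatch: { input: '$some-type', regex: /type2/ } }
          ]
        }
      ]
    }
  }
}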
The schema:
type User {
id: ID!
createdCurricula: [Curriculum]
}
type Curriculum {
id: ID!
title: String!
creator: User!
}
The resolver (request mapping template) to query all curricula of a given user:
{
"version" : "2017-02-28",
"operation" : "Query",
"query" : {
## Provide a query expression. **
"expression": "userId = :userId",
"expressionValues" : {
":userId" : {
"S" : "${context.source.id}"
}
}
},
"index": "userIdIndex",
"limit": #if(${context.arguments.limit}) ${context.arguments.limit} #else 20 #end,
"nextToken": #if(${context.arguments.nextToken}) "${context.arguments.nextToken}" #else null #end
}
The response map:
{
"items": $util.toJson($context.result.items),
"nextToken": #if(${context.result.nextToken}) "${context.result.nextToken}" #else null #end
}
The query:
query {
getUser(id: "0b6af629-6009-4f4d-a52f-67aef7b42f43") {
id
createdCurricula {
title
}
}
}
The error:
{
"data": {
"getUser": {
"id": "0b6af629-6009-4f4d-a52f-67aef7b42f43",
"createdCurricula": null
}
},
"errors": [
{
"path": [
"getUser",
"createdCurricula"
],
"locations": null,
"message": "Can't resolve value (/getUser/createdCurricula) : type mismatch error, expected type LIST"
}
]
}
The CurriculumTable has a global secondary index titled userIdIndex, which has userId as the partition key.
If I change the response map to this:
$util.toJson($context.result.items)
The output is the following:
{
"data": {
"getUser": {
"id": "0b6af629-6009-4f4d-a52f-67aef7b42f43",
"createdCurricula": null
}
},
"errors": [
{
"path": [
"getUser",
"createdCurricula"
],
"errorType": "MappingTemplate",
"locations": [
{
"line": 4,
"column": 5
}
],
"message": "Unable to convert \n{\n [{\"id\":\"87897987\",\"title\":\"Test Curriculum\",\"userId\":\"0b6af629-6009-4f4d-a52f-67aef7b42f43\"}],\n} to class java.lang.Object."
}
]
}
If I take that string and run it through a console.log in my frontend app, I get:
{
[{"id":"2","userId":"0b6af629-6009-4f4d-a52f-67aef7b42f43"},{"id":"1","userId":"0b6af629-6009-4f4d-a52f-67aef7b42f43"}]
}
That's clearly an object. How do I make it... not an object, so that AppSync properly reads it as a list?
SOLUTION
My response map had a set of curly braces around it. I'm pretty sure those were placed there by Amazon's generator. Removing them fixed it.
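In other words, the generated template effectively looked like the broken version below, which is exactly what produces the "Unable to convert { [...] } to class java.lang.Object" error; stripping the outer braces so the template evaluates to the bare array is what fixed it (sketch):
## broken: the extra braces wrap the array in an invalid object
{
$util.toJson($context.result.items)
}
## working: the template returns just the list
$util.toJson($context.result.items)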
I think I'm not seeing the complete view of your schema; I was expecting something like:
schema {
query: Query
}
where Query is the root query; in fact, you didn't share your Query definition with us. Assuming you have the right Query definition, the main problem is in your response mapping template:
> "items": $util.toJson($context.result.items)
This means that you are passing a collection named "items" to the GraphQL query engine, but you are referring to this collection as "createdCurricula". To solve this issue, the response mapping template is the right place to fix it. How? Just replace the above line with the following:
"createdCurricula": $util.toJson($context.result.items),
The main thing to note here is that the mapping template is a bridge between your data sources and GraphQL. Feel free to do any computation or name mapping, but don't forget that the object names in that response JSON are the ones that should match the schema/query definition.
Thanks.
Musema
Change the result type to $util.toJson($ctx.result.data.posts).
The exception message says that it expected type LIST.
Looking at:
{
[{"id":"2","userId":"0b6af629-6009-4f4d-a52f-67aef7b42f43"},{"id":"1","userId":"0b6af629-6009-4f4d-a52f-67aef7b42f43"}]
}
I don't see that createdCurricula is a LIST.
What is currently in DDB is:
"id": "0b6af629-6009-4f4d-a52f-67aef7b42f43",
"createdCurricula": null
I have a geojson file containing a list of locations, each with a longitude, latitude and timestamp. Note that the longitudes and latitudes are multiplied by 10000000.
{
"locations" : [ {
"timestampMs" : "1461820561530",
"latitudeE7" : -378107308,
"longitudeE7" : 1449654070,
"accuracy" : 35,
"junk_i_want_to_save_but_ignore" : [ { .. } ]
}, {
"timestampMs" : "1461820455813",
"latitudeE7" : -378107279,
"longitudeE7" : 1449673809,
"accuracy" : 33
}, {
"timestampMs" : "1461820281089",
"latitudeE7" : -378105184,
"longitudeE7" : 1449254023,
"accuracy" : 35
}, {
"timestampMs" : "1461820155814",
"latitudeE7" : -378177434,
"longitudeE7" : 1429653949,
"accuracy" : 34
}
..
Many of these locations will be the same physical location (e.g. the user's home), but obviously the longitudes and latitudes may not be exactly the same.
I would like to use Elasticsearch and its geo functionality to produce a ranked list of the most common locations, where locations are deemed to be the same if they are within, say, 100m of each other.
For each common location I'd also like the list of all timestamps the user was at that location, if possible!
I'd very much appreciate a sample query to get me started!
Many thanks in advance.
In order to make it work you need to modify your mapping like this:
PUT /locations
{
"mappings": {
"location": {
"properties": {
"location": {
"type": "geo_point"
},
"timestampMs": {
"type": "long"
},
"accuracy": {
"type": "long"
}
}
}
}
}
Then, when you index your documents, you need to divide the latitude and longitude by 10000000, and index like this:
PUT /locations/location/1
{
"timestampMs": "1461820561530",
"location": {
"lat": -37.8103308,
"lon": 14.4967407
},
"accuracy": 35
}
Finally, your search query below...
POST /locations/location/_search
{
"aggregations": {
"zoomedInView": {
"filter": {
"geo_bounding_box": {
"location": {
"top_left": "-37, 14",
"bottom_right": "-38, 15"
}
}
},
"aggregations": {
"zoom1": {
"geohash_grid": {
"field": "location",
"precision": 6
},
"aggs": {
"ts": {
"date_histogram": {
"field": "timestampMs",
"interval": "15m",
"format": "DDD yyyy-MM-dd HH:mm"
}
}
}
}
}
}
}
}
...will yield the following result:
{
"aggregations": {
"zoomedInView": {
"doc_count": 1,
"zoom1": {
"buckets": [
{
"key": "k362cu",
"doc_count": 1,
"ts": {
"buckets": [
{
"key_as_string": "Thu 2016-04-28 05:15",
"key": 1461820500000,
"doc_count": 1
}
]
}
}
]
}
}
}
}
UPDATE
According to our discussion, here is a solution that could work for you. Using Logstash, you can call your API and retrieve the big JSON document (using the http_poller input), extract/transform all locations and sink them to Elasticsearch (with the elasticsearch output) very easily.
Here is how it goes, in order to format each event as depicted in my initial answer:
Using http_poller you can retrieve the JSON locations (note that I've set the polling interval to 1 day, but you can change that to some other value, or simply run Logstash manually each time you want to retrieve the locations)
Then we split the locations array into individual events
Then we divide the latitude/longitude fields by 10,000,000 to get proper coordinates
We also need to clean it up a bit by moving and removing some fields
Finally, we just send each event to Elasticsearch
Logstash configuration locations.conf:
input {
http_poller {
urls => {
get_locations => {
method => get
url => "http://your_api.com/locations.json"
headers => {
Accept => "application/json"
}
}
}
request_timeout => 60
interval => 86400000
codec => "json"
}
}
filter {
split {
field => "locations"
}
ruby {
# note: this uses the legacy Logstash 2.x event API; newer versions need event.get / event.set
code => "
event['location'] = {
'lat' => event['locations']['latitudeE7'] / 10000000.0,
'lon' => event['locations']['longitudeE7'] / 10000000.0
}
"
}
mutate {
add_field => {
"timestampMs" => "%{[locations][timestampMs]}"
"accuracy" => "%{[locations][accuracy]}"
"junk_i_want_to_save_but_ignore" => "%{[locations][junk_i_want_to_save_but_ignore]}"
}
remove_field => [
"locations", "#timestamp", "#version"
]
}
}
output {
elasticsearch {
hosts => ["localhost:9200"]
index => "locations"
document_type => "location"
}
}
You can then run it with the following command:
bin/logstash -f locations.conf
When that has run, you can launch your search query and you should get what you expect.
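As a side note, if the main thing you're after is the ranked list of most common locations together with their timestamps, a variation of the earlier query that drops the date histogram and instead returns the raw hits per geohash cell might be closer to what you described. geohash_grid buckets are sorted by document count by default, and precision 7 cells are roughly 150m across, which is close to your 100m requirement (untested sketch):
POST /locations/location/_search
{
  "size": 0,
  "aggregations": {
    "common_locations": {
      "geohash_grid": {
        "field": "location",
        "precision": 7
      },
      "aggregations": {
        "timestamps": {
          "top_hits": {
            "_source": ["timestampMs"],
            "size": 100
          }
        }
      }
    }
  }
}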
I've installed PL/JSON 1.0.5 in Oracle XE 11g and written a PL/SQL function to extract values from the output of the Amazon AWS describe-instances command.
Obtaining the values for top-level items such as the reservation ID works, but I am unable to get values nested within lower levels of the JSON.
E.g. this example works (using the cut-down AWS JSON inline):
DECLARE
obj JSON;
reservations JSON_LIST;
l_tempobj JSON;
instance JSON;
l_id VARCHAR2(20);
BEGIN
obj:= json('{
"Reservations": [
{
"ReservationId": "r-5a33ea1a",
"Instances": [
{
"State": {
"Name": "stopped"
},
"InstanceId": "i-7e02503e"
}
]
},
{
"ReservationId": "r-e5930ea5",
"Instances": [
{
"State": {
"Name": "running"
},
"InstanceId": "i-77859692"
}
]
}
]
}');
reservations := json_list(obj.get('Reservations'));
l_tempobj := json(reservations);
DBMS_OUTPUT.PUT_LINE('============');
FOR i IN 1 .. l_tempobj.count
LOOP
DBMS_OUTPUT.PUT_LINE('------------');
instance := json(l_tempobj.get(i));
instance.print;
l_id := json_ext.get_string(instance, 'ReservationId');
DBMS_OUTPUT.PUT_LINE(i||'] Instance:'||l_id);
END LOOP;
END;
returning
============
------------
{
"ReservationId" : "r-5a33ea1a",
"Instances" : [{
"State" : {
"Name" : "stopped"
},
"InstanceId" : "i-7e02503e"
}]
}
1] Instance:r-5a33ea1a
------------
{
"ReservationId" : "r-e5930ea5",
"Instances" : [{
"State" : {
"Name" : "running"
},
"InstanceId" : "i-77859692"
}]
}
2] Instance:r-e5930ea5
but this example to return the instance ID doesn't:
DECLARE
l_clob CLOB;
obj JSON;
reservations JSON_LIST;
l_tempobj JSON;
instance JSON;
L_id VARCHAR2(20);
BEGIN
obj:= json('{
"Reservations": [
{
"ReservationId": "r-5a33ea1a",
"Instances": [
{
"State": {
"Name": "stopped"
},
"InstanceId": "i-7e02503e"
}
]
},
{
"ReservationId": "r-e5930ea5",
"Instances": [
{
"State": {
"Name": "running"
},
"InstanceId": "i-77859692"
}
]
}
]
}');
reservations := json_list(obj.get('Reservations'));
l_tempobj := json(reservations);
DBMS_OUTPUT.PUT_LINE('============');
FOR i IN 1 .. l_tempobj.count
LOOP
DBMS_OUTPUT.PUT_LINE('------------');
instance := json(l_tempobj.get(i));
instance.print;
l_id := json_ext.get_string(instance, 'Instances.InstanceId');
DBMS_OUTPUT.PUT_LINE(i||'] Instance:'||l_id);
END LOOP;
END;
returning
============
------------
{
"ReservationId" : "r-5a33ea1a",
"Instances" : [{
"State" : {
"Name" : "stopped"
},
"InstanceId" : "i-7e02503e"
}]
}
1] Instance:
------------
{
"ReservationId" : "r-e5930ea5",
"Instances" : [{
"State" : {
"Name" : "running"
},
"InstanceId" : "i-77859692"
}]
}
2] Instance:
The only change from the first example to the second is replacing 'ReservationId' with 'Instances.InstanceId', but in the second example, although the function succeeds and the instance.print statement outputs the full JSON, this code doesn't populate the instance ID into l_id, so it is not output by DBMS_OUTPUT.
I also get the same result (i.e. no value in l_id) if I just use 'InstanceId'.
My assumption, from reading the examples, was that JSON Path should allow me to select nested values using dot notation, but it doesn't seem to work. I also tried extracting 'Instances' into a temp variable of type JSON_LIST and then accessing it from there, but wasn't able to get a working example either.
Any help appreciated. Many Thanks.
See ex8.sql. In particular, it says:
JSON Path for PL/JSON:
never raises an exception (null is returned instead)
arrays are 1-indexed
use dots to navigate through the json scopes.
the empty string as path returns the entire json object.
JSON Path only work with JSON as input.
7 get types are supported: string, number, bool, null, json, json_list and date!
spaces inside [ ] are not important, but is important otherwise
Thus, your path should be:
l_id := json_ext.get_string(instance, 'Instances[1].InstanceId');
Or, without directly using json_ext:
l_id := instance.path('Instances[1].InstanceId');
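As a follow-up on your attempt to pull 'Instances' into a JSON_LIST: that works too, and it's handy when a reservation contains more than one instance. A rough sketch of the inner loop, to go right after instance := json(l_tempobj.get(i)); in your existing loop (it assumes json_ext.get_json_list is available in your PL/JSON install, and that the extra variables are added to the DECLARE section):
-- declare: instances JSON_LIST; inst JSON; l_inst_id VARCHAR2(20);
instances := json_ext.get_json_list(instance, 'Instances');
FOR j IN 1 .. instances.count
LOOP
inst := json(instances.get(j));
l_inst_id := json_ext.get_string(inst, 'InstanceId');
DBMS_OUTPUT.PUT_LINE(i||'.'||j||'] InstanceId: '||l_inst_id);
END LOOP;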
I want many GYP scripts to have a common target, so I decided to move it into a separate include file. The simplest test case that produces an error:
foo.gyp
{
'includes' : [
'bar.gypi',
],
}
bar.gypi
{
'targets': [
{
'target_name' : 'phony',
'type' : 'none',
'actions' : [
{
'action_name' : '_phony_',
'inputs' : ['',],
'outputs' : ['',],
'action' : ['_phony_',],
'message' : '_phony_',
},
],
},
],
}
Produces error:
IndexError: string index out of range while reading includes of foo.gyp while trying to load foo.gyp
Some observations:
If I delete actions from target, everything parses well
If I move targets (with actions) to foo.gyp, everything parses well
Am I doing something wrong?
It looks like the "outputs" list cannot be empty or contain an empty string:
# gyp/make.py:893
self.WriteLn("%s: obj := $(abs_obj)" % QuoteSpaces(outputs[0]))
You may have empty inputs, but in that case the phony action will fire only once. I haven't found any mention of phony actions in the GYP documentation, but I have the following variant working:
# bar.gypi
{
'targets': [
{
'target_name' : 'phony',
'type' : 'none',
'actions' : [
{
'action_name' : '_phony_',
'inputs' : ['./bar.gypi'], # The action depends on this file
'outputs' : ['test'], # Some dummy file
'action' : ['echo', 'test'],
'message' : 'Running phony target',
},
],
},
],
}
I could try to find a better way if you tell me more about the task you are trying to solve.