I'm using LoopBack 3.x and I get the error "lat must be a number when creating a GeoPoint" while building a custom API for finding nearby doctors.
My user.json file:
"findNearByDoctors": {
"accepts": [
{
"arg": "geoPoint",
"type": "geopoint",
"required": true,
"description": "user's location"
},
{
"arg": "range",
"type": "number",
"required": false,
"description": "range"
}
],
"returns": [],
"description": "find nearby doctors",
"http": [
{
"path": "/get-nearby-doctors",
"verb": "get"
}
]
}
The inputs I tried are:
geoPoint: (1.28210155945393, 103.81722480263163)
geoPoint: {"lat": 1.28210155945393, lng": 103.81722480263163}
I have data with multiple dimensions stored in a Druid cluster: for example, data about movies and the revenue they earned in each country where they were screened.
I'm trying to build a query whose result is a table of all the movies, the total revenue of each, and the revenue per country.
I managed to do this in Turnilo, which generated the following Druid query:
[
  [
    {
      "queryType": "timeseries",
      "dataSource": "movies_source",
      "intervals": "2021-11-18T00:01Z/2021-11-21T00:01Z",
      "granularity": "all",
      "aggregations": [
        {
          "name": "__VALUE__",
          "type": "doubleSum",
          "fieldName": "revenue"
        }
      ]
    },
    {
      "queryType": "topN",
      "dataSource": "movies_source",
      "intervals": "2021-11-18T00:01Z/2021-11-21T00:01Z",
      "granularity": "all",
      "dimension": {
        "type": "default",
        "dimension": "movie_id",
        "outputName": "movie_id"
      },
      "aggregations": [
        {
          "name": "revenue",
          "type": "doubleSum",
          "fieldName": "revenue"
        }
      ],
      "metric": "revenue",
      "threshold": 50
    }
  ],
  [
    {
      "queryType": "topN",
      "dataSource": "movies_source",
      "intervals": "2021-11-18T00:01Z/2021-11-21T00:01Z",
      "granularity": "all",
      "filter": {
        "type": "selector",
        "dimension": "movie_id",
        "value": "some_movie_id"
      },
      "dimension": {
        "type": "default",
        "dimension": "country",
        "outputName": "country"
      },
      "aggregations": [
        {
          "name": "revenue",
          "type": "doubleSum",
          "fieldName": "revenue"
        }
      ],
      "metric": "revenue",
      "threshold": 5
    }
  ]
]
But it doesn't work when I use it as the body of a Postman request; I get:
{
  "error": "Unknown exception",
  "errorMessage": "Unexpected token (START_ARRAY), expected VALUE_STRING: need JSON String that contains type id (for subtype of org.apache.druid.query.Query)\n at [Source: (org.eclipse.jetty.server.HttpInputOverHTTP); line: 2, column: 3]",
  "errorClass": "com.fasterxml.jackson.databind.exc.MismatchedInputException",
  "host": null
}
How should I build the corresponding query so that it works with Postman?
I am not familiar with Turnilo, but the error itself gives a hint: the export above is a nested array of queries, while Druid's native endpoint expects a single query object per request, hence "Unexpected token (START_ARRAY)". Have you tried using the Druid console to write SQL and converting it to a native request with the "Explain SQL query" option under the "Run/..." menu?
Your native queries seem to be doing a TopN instead of listing all movies, so I think the SQL might be something like:
SELECT movie_id, country_id, SUM(revenue) total_revenue
FROM movies_source
WHERE __time BETWEEN '2021-11-18 00:01:00' AND '2021-11-21 00:01:00'
GROUP BY movie_id, country_id
ORDER BY total_revenue DESC
LIMIT 50
I don't have your data source to test with, but I tested a similar query structure against the sample wikipedia data:
SELECT namespace, cityName, SUM(sum_added) total
FROM "wikipedia" r
WHERE cityName IS NOT NULL
  AND __time BETWEEN '2015-09-12 00:00:00' AND '2015-09-15 00:00:00'
GROUP BY namespace, cityName
ORDER BY total DESC
LIMIT 50
which results in the following Native query:
{
  "queryType": "groupBy",
  "dataSource": {
    "type": "table",
    "name": "wikipedia"
  },
  "intervals": {
    "type": "intervals",
    "intervals": [
      "2015-09-12T00:00:00.000Z/2015-09-15T00:00:00.001Z"
    ]
  },
  "virtualColumns": [],
  "filter": {
    "type": "not",
    "field": {
      "type": "selector",
      "dimension": "cityName",
      "value": null,
      "extractionFn": null
    }
  },
  "granularity": {
    "type": "all"
  },
  "dimensions": [
    {
      "type": "default",
      "dimension": "namespace",
      "outputName": "d0",
      "outputType": "STRING"
    },
    {
      "type": "default",
      "dimension": "cityName",
      "outputName": "d1",
      "outputType": "STRING"
    }
  ],
  "aggregations": [
    {
      "type": "longSum",
      "name": "a0",
      "fieldName": "sum_added",
      "expression": null
    }
  ],
  "postAggregations": [],
  "having": null,
  "limitSpec": {
    "type": "default",
    "columns": [
      {
        "dimension": "a0",
        "direction": "descending",
        "dimensionOrder": {
          "type": "numeric"
        }
      }
    ],
    "limit": 50
  },
  "context": {
    "populateCache": false,
    "sqlOuterLimit": 101,
    "sqlQueryId": "cd5aabed-5e08-49b7-af63-fe82c125d3ee",
    "useApproximateCountDistinct": false,
    "useApproximateTopN": false,
    "useCache": false
  },
  "descending": false
}
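Either query can then be sent from Postman as a single JSON object per request. The native query above goes to POST /druid/v2 with Content-Type: application/json; for the SQL route, Druid's SQL API accepts a body like the following at POST /druid/v2/sql (host and port depend on your broker/router setup):
{
  "query": "SELECT namespace, cityName, SUM(sum_added) AS total FROM wikipedia WHERE cityName IS NOT NULL AND __time BETWEEN '2015-09-12 00:00:00' AND '2015-09-15 00:00:00' GROUP BY namespace, cityName ORDER BY total DESC LIMIT 50"
}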
I'm working on an OpenAPI 3 schema.
I would like to use a data model from the components.schemas inside the responses content and have some required nested properties inside that data model. However, it doesn't seem like the required validation is being applied. I'm testing this in Postman with a mock server.
Here is my schema:
{
  "openapi": "3.0.0",
  "info": {
    "version": "1.0.0",
    "title": "Usage stats API"
  },
  "servers": [
    {
      "url": "http://some-middleware-endpoint.com"
    }
  ],
  "paths": {
    "/publishers/{publisherId}/files/{fileId}": {
      "get": {
        "summary": "Get single file for publisher",
        "parameters": [
          {
            "name": "publisherId",
            "in": "path",
            "description": "ID of the publisher",
            "required": true,
            "schema": {
              "type": "integer",
              "format": "int64"
            }
          },
          {
            "name": "fileId",
            "in": "path",
            "description": "ID of the file",
            "required": true,
            "schema": {
              "type": "integer",
              "format": "int64"
            }
          }
        ],
        "responses": {
          "200": {
            "description": "File for publisher",
            "headers": {
              "Content-Type": {
                "description": "application/json"
              }
            },
            "content": {
              "application/json": {
                "schema": {
                  "type": "object",
                  "required": [
                    "meta"
                  ],
                  "properties": {
                    "meta": {
                      "type": "object",
                      "required": ["page"],
                      "properties": {
                        "$ref": "#/components/schemas/Pagination"
                      }
                    }
                  }
                }
              }
            }
          }
        }
      }
    }
  },
  "components": {
    "schemas": {
      "Pagination": {
        "properties": {
          "page": {
            "required": ["current-page", "per-page", "from", "to", "total", "last-page"],
            "type": "object",
            "properties": {
              "current-page": {
                "type": "integer"
              },
              "per-page": {
                "type": "integer"
              },
              "from": {
                "type": "integer"
              },
              "to": {
                "type": "integer"
              },
              "total": {
                "type": "integer"
              },
              "last-page": {
                "type": "integer"
              }
            }
          }
        }
      }
    }
  }
}
This response passes validation:
{
  "meta": {
    "page": {}
  }
}
Even though none of the attributes I've required ("required": ["current-page", "per-page", "from", "to", "total", "last-page"]) are present.
Basically, I would like page and all its nested properties to be required.
I guess I'm doing something wrong in defining the properties. Any help is appreciated!
Oh well, I guess my issue was fixed by pulling the $ref up one level.
The following seems to work inside responses.content.
"meta": {
"type": "object",
"required": [
"page"
],
"$ref": "#/components/schemas/Pagination"
}
instead of
"meta": {
"type": "object",
"required": ["page"],
"properties": {
"$ref": "#/components/schemas/Pagination"
}
}
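One caveat: per the OpenAPI 3.0 spec, keywords placed alongside $ref are supposed to be ignored, so some tools may still drop the required constraint even in the working version. A more portable way to express the same thing (a sketch, not tested against Postman's mock server) is to wrap the reference in allOf:
"meta": {
  "type": "object",
  "required": ["page"],
  "allOf": [
    { "$ref": "#/components/schemas/Pagination" }
  ]
}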
As per my understanding, User/login is a built-in remote method. In the explorer (Swagger) it is shown with all the needed details, but my own remote method doesn't get all that nice information, such as an example value.
How can I add an Example Value for my method, which accepts an object?
Here is my JSON:
"methods": {
"specifyGurdianPhone": {
"accepts": [
{
"arg": "guardianPhone",
"type": "Object",
"required": true,
"description": "{guardianPhone: \"+97255111111\"}",
"http": {
"source": "body"
}
}
],
"returns": [
{
"arg": "success",
"type": "Object",
"root": true
}
],
"description": "",
"http": {
"verb": "post"
}
It's because your param and response have the "object" type, so Swagger doesn't know what they look like. To get such a view you need to specify model names as the type, or describe the possible properties one by one.
Example 1:
{
  "specifyGurdianPhone": {
    "accepts": [
      {
        "arg": "guardianPhone",
        "type": "string", // !!! Now swagger knows the exact type of the "guardianPhone" property
        "required": true,
        "http": {
          "source": "form" // !!! Having "form" here says it is a property inside an object
                           // (it allows us to have the "string" type inside the "object")
        }
      }
    ],
    "returns": [
      {
        "arg": "success",
        "type": "GuardianPhone", // !!! For example, let's return the full "GuardianPhone" instance
        "root": true
      }
    ],
    "description": "",
    "http": {
      "verb": "post"
    }
  }
}
Example 2:
{
  "specifyGurdianPhone": {
    "accepts": [
      {
        "arg": "guardianPhone",
        "type": "object",
        "model": "GuardianPhone", // !!! Another way to let swagger know the type of the body
                                  // (the same is true if you make the type "GuardianPhone" instead of "object" and delete the "model" property)
        "required": true,
        "http": {
          "source": "body"
        }
      }
    ],
    "returns": [
      {
        ...
      }
    ]
  }
}
Example 3:
{
  "specifyGurdianPhone": {
    "accepts": [
      {
        "arg": "guardianPhone",
        "type": "object",
        "model": "GuardianPhone", // !!! Another way to let swagger know the type of the body
                                  // (the same is true if you make the type "GuardianPhone" instead of "object" and delete the "model" property)
        "required": true,
        "http": {
          "source": "body"
        }
      }
    ],
    "returns": [
      {
        "arg": "success",
        // !!! Instead of a model name you can describe properties one by one,
        // but this trick will not work with arrays (this is true for "accepts" also)
        // !!! WARNING: You need strong-remoting v3.15.0 or higher for this approach, due to https://github.com/strongloop/loopback/issues/3717
        "type": {
          "id": {"type": "string", "description": "An id property"},
          "guardianPhone": {"type": "string"}
        },
        "root": true
      }
    ],
    "description": "",
    "http": {
      "verb": "post"
    }
  }
}
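For any of these examples to render, Swagger must be able to resolve GuardianPhone, so a model with that name has to exist and be attached to the app. A minimal sketch of such a model definition (the file name and properties here are assumptions for illustration):
// common/models/guardian-phone.json (hypothetical)
{
  "name": "GuardianPhone",
  "base": "Model",
  "idInjection": false,
  "properties": {
    "guardianPhone": {
      "type": "string",
      "required": true
    }
  }
}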
When I POST to /api/testmodel an object with only the required fields, the object is created correctly in the DB, but the response contains only the object I sent in the request body. I'm trying to get back the full object, with the null fields included, in the response.
Thanks for the help!
{
  "name": "test",
  "plural": "test",
  "base": "PersistedModel",
  "idInjection": true,
  "replaceOnPUT": false,
  "properties": {
    "city": {
      "type": "string",
      "length": 100
    },
    "name": {
      "type": "string",
      "required": true,
      "length": 100
    },
    "id": {
      "type": "string",
      "id": true,
      "required": true
    },
    "officePhone": {
      "type": "string",
      "length": 100
    },
    "status": {
      "type": "string",
      "required": false,
      "length": 200
    },
    "street": {
      "type": "string",
      "length": 100
    }
  },
  "methods": {}
}
Then you need to define default values in your model, for example for city:
"properties": {
"city": {
"type": "string",
"length": 100,
"default": ""
},
...
In your controller, after you have created your new record and have the record ID, perform a findById query and return that object instead of the object returned from create. This should give you a response similar to a GET route.
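A minimal sketch of that approach as a custom remote method (the method name and route are invented for illustration; the model is the test model from the question):
// common/models/test.js -- hypothetical create-then-fetch wrapper
module.exports = function(Test) {
  Test.createAndFetch = function(data, cb) {
    Test.create(data, function(err, created) {
      if (err) return cb(err);
      // Re-read the record by id so defaulted/null fields are included
      Test.findById(created.id, cb);
    });
  };

  Test.remoteMethod('createAndFetch', {
    accepts: {arg: 'data', type: 'object', http: {source: 'body'}},
    returns: {arg: 'data', type: 'test', root: true},
    http: {verb: 'post', path: '/create-and-fetch'}
  });
};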
I would like to upgrade my AWS Data Pipeline definition to EMR 4.x or 5.x so I can take advantage of Hive's latest features (version 2.0+), such as CURRENT_DATE and CURRENT_TIMESTAMP.
The change from EMR 3.x to 4.x/5.x requires the use of releaseLabel in EmrCluster instead of amiVersion.
When I use "releaseLabel": "emr-4.1.0", I get the following error: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.tez.TezTask
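For reference, the 4.x/5.x variant of the cluster object I've been testing looks like this (the release label and instance types are my own picks; I also bumped the instance type, since I'm not sure m1.medium is still supported on newer releases):
{
  "type": "EmrCluster",
  "name": "EmrClusterForImport",
  "id": "EmrCluster1",
  "coreInstanceType": "m3.xlarge",
  "coreInstanceCount": "1",
  "masterInstanceType": "m3.xlarge",
  "releaseLabel": "emr-5.13.0",
  "region": "us-east-1",
  "terminateAfter": "1 Hours"
}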
Below is my data pipeline definition for EMR 3.x. It works well, so I hope others find it useful (along with the answer for EMR 4.x/5.x), since the common recommendation for importing data into DynamoDB from a file is to use Data Pipeline, yet hardly anyone has put forward a solid, simple, working example (say, for a custom data format).
{
  "objects": [
    {
      "type": "DynamoDBDataNode",
      "id": "DynamoDBDataNode1",
      "name": "OutputDynamoDBTable",
      "dataFormat": {
        "ref": "DynamoDBDataFormat1"
      },
      "region": "us-east-1",
      "tableName": "testImport"
    },
    {
      "type": "Custom",
      "id": "Custom1",
      "name": "InputCustomFormat",
      "column": [
        "firstName", "lastName"
      ],
      "columnSeparator": "|",
      "recordSeparator": "\n"
    },
    {
      "type": "S3DataNode",
      "id": "S3DataNode1",
      "name": "InputS3Data",
      "directoryPath": "s3://data.domain.com",
      "dataFormat": {
        "ref": "Custom1"
      }
    },
    {
      "id": "Default",
      "name": "Default",
      "scheduleType": "ondemand",
      "failureAndRerunMode": "CASCADE",
      "resourceRole": "DataPipelineDefaultResourceRole",
      "role": "DataPipelineDefaultRole",
      "pipelineLogUri": "s3://logs.data.domain.com"
    },
    {
      "type": "HiveActivity",
      "id": "HiveActivity1",
      "name": "S3ToDynamoDBImportActivity",
      "output": {
        "ref": "DynamoDBDataNode1"
      },
      "input": {
        "ref": "S3DataNode1"
      },
      "hiveScript": "INSERT OVERWRITE TABLE ${output1} SELECT reflect('java.util.UUID', 'randomUUID') as uuid, TO_DATE(FROM_UNIXTIME(UNIX_TIMESTAMP())) as loadDate, firstName, lastName FROM ${input1};",
      "runsOn": {
        "ref": "EmrCluster1"
      }
    },
    {
      "type": "EmrCluster",
      "name": "EmrClusterForImport",
      "id": "EmrCluster1",
      "coreInstanceType": "m1.medium",
      "coreInstanceCount": "1",
      "masterInstanceType": "m1.medium",
      "amiVersion": "3.11.0",
      "region": "us-east-1",
      "terminateAfter": "1 Hours"
    },
    {
      "type": "DynamoDBDataFormat",
      "id": "DynamoDBDataFormat1",
      "name": "OutputDynamoDBDataFormat",
      "column": [
        "uuid", "loadDate", "firstName", "lastName"
      ]
    }
  ],
  "parameters": []
}
A sample input file could look like this:
John|Doe
Jane|Doe
Carl|Doe
Bonus: rather than computing CURRENT_DATE in a column, how can I set it as a variable in the hiveScript section? I tried "SET loadDate = CURRENT_DATE;\n\n INSERT OVERWRITE..." to no avail. Not shown in my example are other dynamic fields I would like to set before the query clause.