How do I query an array within LoopBack 3?
I have the following method:
Driver.reserve = async function(cb) {
  let query = {
    where: {
      preferred_delivery_days: {
        elemMatch: {
          availability: 0
        }
      }
    }
  };
  return await app.models.Driver.find(query);
};
But I am getting the following error:
code: ER_PARSE_ERROR
errno: 1064
sqlMessage: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ''{\"availability\":0}' ORDER BY `id`' at line 1
sqlState: 42000
sql: SELECT
driver021489826505814413.`first_name`,driver021489826505814413.`last_name`,driver021489826505814413.`gender`,driver021489826505814413.`preferred_delivery_days` FROM `my_driver_table` driver021489826505814413 WHERE driver021489826505814413.`preferred_delivery_days`'{\"availability\":0}' ORDER BY `id`
Here is an example of a database entry:
[
  {
    "day": 5,
    "time": "morning",
    "availability": 0
  }
]
I think this might be hard to achieve, since according to the docs:
Data source connectors for relational databases don’t support filtering nested properties.
If your project is still in its early phase, you may consider switching the database to MongoDB or another NoSQL database.
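If switching databases is not an option, one possible workaround is to bypass the ORM filter and use MySQL's JSON functions (MySQL 5.7+) through raw SQL. A minimal sketch, assuming the datasource is registered as mysqlDs and preferred_delivery_days is stored as a JSON column; those names are illustrative, not from the original post:

Driver.reserve = function(cb) {
  // JSON_CONTAINS matches rows whose array contains an element
  // that includes {"availability": 0}
  const sql =
    "SELECT * FROM my_driver_table " +
    "WHERE JSON_CONTAINS(`preferred_delivery_days`, '{\"availability\": 0}')";
  app.dataSources.mysqlDs.connector.execute(sql, [], cb);
};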
I am creating an incremental table which pulls data from two tables (it uses UNION ALL on the two tables):
select * from ${ref("tableTrinity")}
${when(incremental(), `WHERE created_time > (SELECT MAX(created_time) FROM ${self()})`) }
UNION ALL
select * from ${ref("tableDavis")}
${when(incremental(), `WHERE created_time > (SELECT MAX(created_time) FROM ${self()})`) }
On both tables I have applied assertions that the unique key must be non-null. To test this, I updated the key value of a single row in tableTrinity to null and executed the incremental SQLX script.
The UNION runs with no failures and the null value is pulled in.
Below is the config for tableTrinity:
config {
  type: "table",
  description: "Table for location Trinity & 6th Street",
  assertions: {
    nonNull: ["trip_id"],
    uniqueKey: ["trip_id"]
  },
  tags: ["Derived Table", "Non PII"]
}
Config for the incremental table:
config {
  type: "incremental",
  dependencies: ["dataformTest_tableTrinity_assertions_rowConditions"],
  description: "Incremental Test",
  tags: ["Dependent Table"],
  uniqueKey: ["trip_id"],
  assertions: {
    nonNull: ["trip_id"]
  },
  bigquery: {
    labels: {
      department: "bikes",
      "cost-center": "mechanics"
    }
  }
}
Though they have documentation on assertions, I could not find any example or video of how this works, how the orchestration is stopped on failures, or how to review the failed data.
Has anyone been able to implement assertions that stop an execution on assertion failure?
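For what it's worth, my understanding of reviewing failed data (a sketch, assuming Dataform's defaults; the schema and view names below are the assumed defaults and the one referenced in the dependencies array above, and may differ in your project): each assertion compiles to a view whose query returns the offending rows, so after a failed run you can inspect them directly:

-- Inspect the rows that violated the assertion (hypothetical names)
SELECT *
FROM dataform_assertions.dataformTest_tableTrinity_assertions_rowConditions;

And because the incremental config lists the assertion in its dependencies, a failure of that assertion should cause the dependent incremental action to be skipped rather than executed.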
According to the Apache Druid documentation on native queries:
Timeseries queries normally fill empty interior time buckets with zeroes.
For example, if you issue a "day" granularity timeseries query for the interval 2012-01-01/2012-01-04, and no data exists for 2012-01-02, you will receive:
[
  {
    "timestamp": "2012-01-01T00:00:00.000Z",
    "result": { "sample_name1": <some_value> }
  },
  {
    "timestamp": "2012-01-02T00:00:00.000Z",
    "result": { "sample_name1": 0 }
  },
  {
    "timestamp": "2012-01-03T00:00:00.000Z",
    "result": { "sample_name1": <some_value> }
  }
]
This can be controlled by the value of the "skipEmptyBuckets" context flag, whose default value is false (do not skip empty buckets; zero-fill them).
However, when querying timeseries data with Druid SQL, the default behavior is to skip all empty buckets. I have to set the query context explicitly to get the zero-filled results I want:
"context": {
  "skipEmptyBuckets": false
}
This troubles me a lot because I need zero-filling to show all buckets in Apache Superset's timeseries charts, but there is no way to set the query context there.
As far as I know, the SQL statement is internally translated into a native query, so why the inconsistency?
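In the meantime, when issuing SQL directly against Druid's HTTP endpoint (outside Superset), the context can be supplied alongside the statement. A sketch; the datasource name and the SELECT itself are placeholders:

POST /druid/v2/sql
{
  "query": "SELECT FLOOR(__time TO DAY) AS \"day\", COUNT(*) AS cnt FROM my_datasource GROUP BY 1",
  "context": { "skipEmptyBuckets": false }
}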
I created a log sink at the folder level, so it neatly streams all the logs to BigQuery. In the log sink configuration, I specified the following options to make it stream to (daily) partitions:
"bigqueryOptions": {
"usePartitionedTables": true,
"usesTimestampColumnPartitioning": true # output only
}
According to the BigQuery documentation and the BigQuery resource type, I would assume that this would automatically create partitions, but it doesn't. I verified that it didn't create the partitions with the following query:
#LegacySQL
SELECT table_id, partition_id from [dataset1.table1$__PARTITIONS_SUMMARY__];
Gives me:
[
  {
    "table_id": "table1",
    "partition_id": "__UNPARTITIONED__"
  }
]
Is there something I am missing here? It should have been partitioned by date.
The problem was that I did not wait long enough for the first partition to become active. A log sink streams data as unpartitioned at first: rows land in the __UNPARTITIONED__ partition and are moved into date partitions after a while, so the partition for today only becomes visible after a few hours. Problem solved! The same query now gives me:
[
  {
    "table_id": "table1",
    "partition_id": "__UNPARTITIONED__"
  },
  {
    "table_id": "table1",
    "partition_id": "20200510"
  },
  {
    "table_id": "table1",
    "partition_id": "20200511"
  }
]
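As an aside, the same check works without legacy SQL. A sketch using standard SQL's INFORMATION_SCHEMA (assuming dataset1 lives in your default project and region):

SELECT table_name, partition_id
FROM dataset1.INFORMATION_SCHEMA.PARTITIONS
WHERE table_name = 'table1';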
I have entities that look like this:
{
  name: "Max",
  nicknames: [
    "bestuser"
  ]
}
How can I query by nickname to get the name?
I have created the following index:
indexes:
- kind: users
  properties:
  - name: name
  - name: nicknames
I use the Node.js client library to query by nickname:
db.createQuery('default','users').filter('nicknames', '=', 'bestuser')
The response is only an empty array.
Is there a way to do this?
You need to actually run the query against Datastore, not just create it. I'm not familiar with the Node.js library, but this is the code given on the Google Cloud website:
datastore.runQuery(query).then(results => {
  // Task entities found.
  const tasks = results[0];
  console.log('Tasks:');
  tasks.forEach(task => console.log(task));
});
where query would be
const query = db.createQuery('default','users').filter('nicknames', '=', 'bestuser')
Check the documentation at https://cloud.google.com/datastore/docs/concepts/queries#datastore-datastore-run-query-nodejs
The first point to notice is that you don't need to create an index for this kind of search. There are no inequalities, no ordering, and no projections, so it is unnecessary.
As Reuben mentioned, you've created the query but you didn't run it.
// Assumes this runs inside a Promise executor that provides resolve/reject;
// TNoMoreResults holds Datastore's NO_MORE_RESULTS constant.
ds.runQuery(query, (err, entities, info) => {
  if (err) {
    reject(err);
  } else {
    response.resultStatus = info.moreResults;
    response.cursor = info.moreResults === TNoMoreResults ? null : info.endCursor;
    resolve(entities);
  }
});
In my case, the response structure was built to collect cursor information (to tell whether there is more data than I queried, since I limited the query size with limit), but you don't need anything more than the resolve(entities).
If you are using the default namespace you need to remove it from your query. Your query needs to be like this:
const query = db.createQuery('users').filter('nicknames', '=', 'bestuser')
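Putting the pieces together, a minimal end-to-end sketch (assuming the current @google-cloud/datastore package and entities of kind users in the default namespace):

const { Datastore } = require('@google-cloud/datastore');
const db = new Datastore();

// Build the query, run it, and print each matching user's name.
const query = db.createQuery('users').filter('nicknames', '=', 'bestuser');

db.runQuery(query).then(([entities]) => {
  entities.forEach(user => console.log(user.name));
});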
I read the entire blob as a string to get the bytes of a binary file here. I imagine you can simply parse the JSON per your requirements.
I am using Elasticsearch in my application to search for a matching word anywhere in a table.
This is the query I have used to fetch my results:
search({ query: { prefix: { _all: keywords }}, sort: [ { start_date: 'asc', start_time: 'asc' } ] })
The selected records were then filtered by date, to match the date range(s) specified in the application, with the following query:
where("status_id= ? and active=? and (((start_date >= ?) and (start_date <= ?))
or ((start_date <= ?) and (? <= end_date)))",2,true,range_start_date,
range_end_date,range_start_date,range_start_date)
But I know this is not a good way to fetch results, so now I want to modify this to fetch just the required data from the Elasticsearch index.
After a long search I found "query_string" and "simple_query_string", which match my requirement, but so far I have been unsuccessful in getting the required result.
How can I combine the date conditions with the Elasticsearch query to get the required records?
Can someone please help?
Thanks in advance.
Finally, I was able to find the answer to the question myself. I was able to filter the searched content by date with a "filter" keyword.
I modified the query as follows:
search_query = {
  query: {
    prefix: {
      _all: keywords
    }
  },
  filter: {
    query: {
      query_string: {
        # The OR between the two date groups is explicit here; query_string
        # defaults to OR between adjacent clauses, but spelling it out matches
        # the intent of the original SQL condition.
        query: "status_id:2 AND active:true AND
                ((start_date:>=#{range_start_date} AND start_date:<=#{range_end_date}) OR
                 (start_date:<=#{range_start_date} AND end_date:>=#{range_start_date}))"
      }
    }
  },
  sort: [ { start_date: 'asc', start_time: 'asc' } ]
}
And finally I fetched the result with:
#result = self.search(search_query)
If there is any way I could improve this code, please suggest. Thank you.
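If you want to avoid building the condition as an interpolated string, the same logic can be expressed with structured filters instead. A sketch, assuming a pre-2.x Elasticsearch (which the top-level filter and the _all field suggest), using the and/or/range filters:

search_query = {
  query: { prefix: { _all: keywords } },
  filter: {
    and: [
      { term: { status_id: 2 } },
      { term: { active: true } },
      { or: [
          # start_date falls inside the requested range
          { range: { start_date: { gte: range_start_date, lte: range_end_date } } },
          # or the record spans the start of the requested range
          { and: [
              { range: { start_date: { lte: range_start_date } } },
              { range: { end_date: { gte: range_start_date } } }
            ] }
        ] }
    ]
  },
  sort: [ { start_date: 'asc', start_time: 'asc' } ]
}

This also sidesteps date-formatting issues in the query string, since the range filters take the values directly.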