AWS CloudSearch - Structured query parser

I am using the structured query parser in CloudSearch and need to allow multi-word search.
For example: a user is searching for "Play football" and types "Pl Foo".
Using the simple query parser, I can use
"Pl Foo"
and it works.
But I need to use the structured query parser.
(phrase 'Pl Foo')
does not work.
How can I make this work in CloudSearch using the structured query parser?
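For reference, a minimal sketch of how these two queries might be sent with boto3, plus one possible structured alternative that matches each typed fragment as a prefix (the domain endpoint, the field name title, and the prefix-based rewrite are assumptions, not a confirmed fix):
import boto3

# Hypothetical CloudSearch search endpoint; replace with your domain's endpoint.
client = boto3.client(
    "cloudsearchdomain",
    endpoint_url="https://search-mydomain-xxxxxxxx.us-east-1.cloudsearch.amazonaws.com",
)

# Simple query parser form from the question.
client.search(query='"Pl Foo"', queryParser="simple")

# Structured parser: the phrase operator matches whole terms, so one option is to
# match each typed fragment as a prefix instead (field name "title" is illustrative).
client.search(
    query="(and (prefix field=title 'pl') (prefix field=title 'foo'))",
    queryParser="structured",
)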

Related

Django Elasticsearch dsl drf OR query

How to make OR query (using django-elasticsearch-dsl-drf) with multiple contains statements?
Query in words: Search for objects where title contains "hello" OR description contains "world"
I can't find anything about it in the documentation: https://django-elasticsearch-dsl-drf.readthedocs.io/en/0.3.12/advanced_usage_examples.html#usage-example
What you need is a boolean query with a should clause; it's very easy to construct in JSON format and validate the search results.
For Django, although I am not familiar with it, this page of the query DSL documentation, and specifically this sub-section, should fix your issue.
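As a sketch of that bool/should approach with django-elasticsearch-dsl (the EntryDocument class and the title/description fields are illustrative, not from the question):
from elasticsearch_dsl import Q

from myapp.documents import EntryDocument  # hypothetical Document registered with django-elasticsearch-dsl

# bool query with a "should" clause: title matches "hello" OR description matches "world"
query = Q(
    "bool",
    should=[
        Q("match", title="hello"),
        Q("match", description="world"),
    ],
    minimum_should_match=1,
)

for hit in EntryDocument.search().query(query):
    print(hit.title)
If literal substring matching is required, wildcard clauses such as Q("wildcard", title="*hello*") can replace the match clauses, at some performance cost.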

How to use JSONPath in Ballerina?

I was trying to process some JSON data. I would like to know whether it is possible to use JSONPath in Ballerina 0.990.2? Something similar to $..temperature[?(@.celcius<25)]
Ballerina does not support the JSONPath spec. However, it supports field-based access and index-based access syntax for JSON, where you can access fields of JSON objects and elements of JSON arrays in a way similar to JSON path.
For example:
json j = ...;
// Field-based and index-based access, similar to a JSON path step:
string name = j.a.b[3].c;
Not all the features of JSON path, such as wildcards, filters, etc., are supported yet.

Haystack query for Solr to find items where an attribute is unset or a specified value

I'm trying to query Solr through Haystack for all objects that either do not have an attribute (it's null) or have that attribute set to a specified value.
I can query Solr directly with the snippet (brand:foo OR (*:* -brand:*)) and get what I want, but I can't find a way to formulate this, or anything logically equivalent, through Haystack without really ugly hacks.
I did find this ugly hack:
SearchQuerySet().filter(brand=Raw('%s OR (*:* -brand:*)' % Clean('foo')))
But it chains really poorly with that OR in there without any parentheses around it.
Ideally a solution using a pure filter would be best, but failing that, a way to add a chainable filter using raw Solr query language.
I'm using django-haystack 2.4.0
It's not a perfect match, but narrow() helps enough to let me do what I want:
SearchQuerySet().narrow('(brand:%s OR (*:* -brand:*))' % Clean('foo'))
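A slightly fuller sketch of that narrow() approach (the brand field and values are illustrative); narrow() maps to a Solr fq, so it composes with further filters without disturbing the OR grouping:
from haystack.query import SearchQuerySet
from haystack.inputs import Clean

# narrow() adds a Solr filter query (fq), keeping the OR grouping intact
sqs = SearchQuerySet().narrow('(brand:%s OR (*:* -brand:*))' % Clean('foo'))
sqs = sqs.filter(content='football')  # chains cleanly on top of the narrowed set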

Solr: List results by distance

I'd like to pass some parameters to Solr that should affect the weighting of the results (I do not want to filter away results that do not match these criteria).
E.g. I'd like to have a language attribute, and if I pass the user's language to the search engine, I'd like to have the results matching that language listed first. As a newbie to Solr, I'd like to know if and how this is possible!
Yes, that's possible by using boost functions. See this FAQ entry or the description of boost functions for the DisMax query parser (the dismax/edismax parsers expose boosting through the bf and bq parameters).
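For illustration, a hedged sketch of boosting (rather than filtering) by language, assuming an edismax request handler and a language field; the core URL, field name, and boost value are placeholders:
import requests

params = {
    "defType": "edismax",      # dismax/edismax accept the bq boost-query parameter
    "q": "football highlights",
    "bq": "language:de^10",    # documents in the user's language rank higher, others still match
    "wt": "json",
}
response = requests.get("http://localhost:8983/solr/mycore/select", params=params)
docs = response.json()["response"]["docs"]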

Inexact full-text search in PostgreSQL and Django

I'm new to PostgreSQL, and I'm not sure how to go about doing an inexact full-text search. Not that it matters too much, but I'm using Django. In other words, I'm looking for something like the following:
q = 'hello world'
queryset = Entry.objects.extra(
    where=['body_tsv @@ plainto_tsquery(%s)'],
    params=[q])
for entry in queryset:
    print(entry.title)
where the list of entries should contain either exactly 'hello world' or something similar. The listings should then be ordered according to how far their value is from the specified string. For instance, I would like the query to include entries containing "Hello World", "hEllo world", "helloworld", "hell world", etc., with some sort of ranking indicating how far each item is from the exact, unchanged query string.
How would you go about doing this?
Your best bet is to use Django raw querysets; I use them with MySQL to perform full-text matching. If the data is all in the database and Postgres provides the matching capability, then it makes sense to use it. Plus, Postgres offers some really useful things in terms of stemming etc. with full-text queries.
Basically, raw querysets let you write the actual query you want, yet they still return models (as long as you are querying a model's table, obviously).
The advantage this gives you is that you can test the exact query you will be using in Postgres first; the documentation covers full-text queries pretty well.
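A hedged sketch of that raw-queryset approach, assuming the Entry model from the question lives in an app called myapp and body_tsv is a maintained tsvector column (table and column names are illustrative):
from myapp.models import Entry  # hypothetical app/model from the question

q = 'hello world'
entries = Entry.objects.raw(
    """
    SELECT id, title,
           ts_rank(body_tsv, plainto_tsquery(%s)) AS rank
    FROM myapp_entry
    WHERE body_tsv @@ plainto_tsquery(%s)
    ORDER BY rank DESC
    """,
    [q, q],
)
for entry in entries:
    print(entry.title, entry.rank)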
The main gotcha with raw querysets at the moment is that they don't support count(). So if you will be returning lots of data and have memory constraints on your application, you might need to do something clever.
"Inexact" matching, however, isn't really part of the full-text search capabilities. Instead you want the Postgres fuzzystrmatch contrib module. Its use is described here, with indexes.
The best option would be to use a search engine for this purpose. django-haystack supports the integration of three different search engines.
In 2022, Django supports full-text search with Postgres via django.contrib.postgres. Full documentation here: https://docs.djangoproject.com/en/4.0/ref/contrib/postgres/search/
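A hedged sketch using that django.contrib.postgres API (assumes 'django.contrib.postgres' is in INSTALLED_APPS and, for trigram similarity, that the pg_trgm extension is enabled; the Entry model and its fields are illustrative):
from django.contrib.postgres.search import (
    SearchQuery, SearchRank, SearchVector, TrigramSimilarity,
)

from myapp.models import Entry  # hypothetical model

q = 'hello world'

# Ranked full-text search
ranked = (
    Entry.objects
    .annotate(rank=SearchRank(SearchVector('title', 'body'), SearchQuery(q)))
    .filter(rank__gt=0)
    .order_by('-rank')
)

# For inexact matches such as 'helloworld' or 'hell world', trigram similarity
# is closer to what the question asks for
fuzzy = (
    Entry.objects
    .annotate(similarity=TrigramSimilarity('body', q))
    .filter(similarity__gt=0.3)
    .order_by('-similarity')
)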