Equivalent of `GET _mapping` in Marvel/Sense in Python? - python-2.7

I'm trying to explore an elasticsearch cluster using Python, and I'm new to elasticsearch. If I use Marvel/Sense, I can see the cluster's schema using GET _mapping. Is there an equivalent way to do this in Python? If so, I can see the "schema" of the cluster.
More generally, I'd like to programmatically discover all the indices, each index's doc_types, and classify the doc_types' fields (are they text strings, ints, floats; what range do the numeric ones take, ...), basically learning the schema and basic statistics of each field. If there is a better way than GET _mapping to start this project, I'm all ears.
This is related to this question, where they are looking for a list of indices using Python, but this one is more general.

You can do that with pyelasticsearch. This is how you can do GET _mapping in Python.
From the docs:
get_mapping(index=None, doc_type=None)
Fetch the mapping definition for a specific index and type.
Parameters:
index – An index or iterable thereof
doc_type – A document type or iterable thereof
Omit both arguments to get mappings for all types and indexes.
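For example, a minimal sketch (assuming pyelasticsearch is installed and the cluster is reachable at localhost:9200):

from pyelasticsearch import ElasticSearch

es = ElasticSearch('http://localhost:9200/')

# Omit both arguments to fetch the mapping for every index and type,
# i.e. the equivalent of GET _mapping in Marvel/Sense.
mappings = es.get_mapping()

# The response is a dict keyed by index name; the exact nesting below it
# depends on your Elasticsearch version.
for index_name, index_mapping in mappings.items():
    print("%s: %s" % (index_name, sorted(index_mapping)))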
Explore the API to learn more.

Related

C++/parquet: How to write "Map" data (nested type)?

Based on the "StreamWriter" example (stream_reader_writer), I'm trying to add another column to the parquet schema which I want to write "Map-Data" (Map<Integer, String>) to.
According to this source, maps are supported by the C++ implementation. However, it's not clear to me how to ...
correctly add/define another column of type "Map" to the parquet schema. According to the previous link, Maps are "Logical Types", so my best try so far is
fields.push_back(parquet::schema::PrimitiveNode::Make("Map", parquet::Repetition::REQUIRED, parquet::LogicalType::Map(), parquet::Type::INT64));
This compiles without error, but I'm uncertain about the last parameter "primitive_type" (parquet::Type::INT64), and about how to define and write actual Map values. Again, the second link provides a general description of a parquet Map, but the GitHub repo is missing a C++ example (or test case) showing how to use a Map/logical types. I'm looking for advice or a C++ example of how to define and use a Map.

How to find entity in search query in Elasticsearch?

I'm using Elasticsearch to build search for an ecommerce site.
One index will have products stored in it; in the products index I'll store categories along with its other attributes. There can be multiple categories, but each attribute will have a single field value (e.g. color).
Let's say a user types in Black (color) Nike (brand) shoes (category).
I want to process this query so that I can extract the entities (brand, attribute, etc.) and write a Request Body search.
I have thought of the following options:
Applying a regex to the query first to extract those entities (but with this approach I'm not sure how fuzziness would work; the user may have a typo in any of the entities).
Using the OpenNLP extension (but this one only works at indexing time, and in the above scenario we want it on the query side).
Using the NER of any good NLP framework (this is not time & cost effective because I'll have millions of products in the engine, and they get updated/added on a frequent basis).
What's the best way to solve the above issue?
Edit:
Found a couple of libraries which would allow fuzzy text matching in regex, but the entities to find will be many, so what's the best way to optimise that?
Still not sure about OpenNLP.
NER won't work in this case because there is a fixed set of entities, so the prediction is not right when no entity is present in the query.
If you cannot achieve the desired results by tuning ElasticSearch's built-in scoring/boosting, most likely you'll need some kind of 'natural language query' processing:
Tokenize the free-form query. Regex can be used for splitting lexemes, however very often it is better to write a custom tokenizer for that.
Perform named-entity recognition to determine the possible field(s) for each keyword. At this step you will get associations like (Black -> color), (Black -> product name), etc. In fact you don't need OpenNLP for that, as this can be just an index (keyword -> field(s)); you can also try to use ElasticSearch's 'suggest' API for this purpose (see the sketch after this list).
(Optional) Recognize special phrases or combinations like "released yesterday" or "price below $20".
Generate the possible combinations of matches and, with the help of a scoring function, determine the 'best' recognition result. The scoring function may be hardcoded (reflecting 'common sense' heuristics) or it may be the result of a machine learning algorithm.
From the recognition result (match metadata), produce a formal query that returns the search results - this may be an ElasticSearch query with field hints, or even a SQL query.
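For illustration, here is a rough Python sketch of steps 1-2 (the keyword -> field index and all data are made up; in practice the index would be built from your product attributes or an ElasticSearch suggester):

# toy keyword -> field(s) index; in reality this would be built from the catalog
KEYWORD_INDEX = {
    'black': ['color', 'product_name'],
    'nike': ['brand'],
    'shoes': ['category'],
}

def tokenize(query):
    # naive whitespace tokenizer; a custom tokenizer is usually better
    return [token.lower() for token in query.split()]

def recognize(query):
    # associate every keyword with its candidate fields (step 2)
    return {token: KEYWORD_INDEX.get(token, ['full_text']) for token in tokenize(query)}

print(recognize("Black Nike shoes"))
# {'black': ['color', 'product_name'], 'nike': ['brand'], 'shoes': ['category']}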
In general, efficient NLQ processing needs significant development effort - I don't recommend implementing it from scratch unless you have enough resources & time for this feature. As an alternative, you can try to find an existing NLQ solution and integrate it, but most likely this will be a commercial product (I don't know of any good free/open-source NLQ components that are really ready for production use).
I would approach this problem as NER tagging, considering you already have a corpus of tags. My approach for this problem would be as below:
Create an annotated dataset of queries with each word tagged to one of the tags, say {color, brand, category}.
Train a NER model (CRF/LSTMs).
This is not time & cost effective because I'll have millions of
products in engine also they get updated/added on frequent basis
To handle this situation, I suggest not using the words in the query as features but rather the attributes of the words. For example, create an indicator function f(x', y) for a word x with context x' (i.e. the word along with the surrounding words and their attributes) and tag y, which returns 1 or 0. A sample indicator function would be:
f('blue', y) = 1 if 'blue' is in the `color attribute` column of the DB, the word previous to 'blue' is in the `product attribute` column of the DB, and y is `color`; otherwise 0.
Create lots of these indicator functions, also known as feature maps.
These indicator functions are then used to train a model using CRFs or LSTMs. Finally, we use the Viterbi algorithm to find the best tagging sequence for your query. For CRFs you can use packages like CRFsuite or CRF++. With these packages, all you have to do is create the indicator functions and the package will train a model for you. Once trained, you can use this model to predict the best sequence for your queries. CRFs are very fast.
This way of training, without using vector representations of the words, will generalise your model without the need for retraining. [Look at NER using CRFs]
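A rough sketch of this approach in Python, assuming the sklearn-crfsuite package and a tiny hand-labelled training set (the attribute lookups below stand in for your DB columns):

import sklearn_crfsuite

COLOR_DB = {'black', 'blue', 'red'}      # stand-in for the color attribute column
BRAND_DB = {'nike', 'adidas'}            # stand-in for the brand column

def word_features(sentence, i):
    # attributes of the word, not the word itself, as suggested above
    word = sentence[i].lower()
    return {
        'in_color_db': word in COLOR_DB,
        'in_brand_db': word in BRAND_DB,
        'prev_in_color_db': i > 0 and sentence[i - 1].lower() in COLOR_DB,
        'is_last': i == len(sentence) - 1,
    }

def sentence_features(sentence):
    return [word_features(sentence, i) for i in range(len(sentence))]

train_sentences = [['Black', 'Nike', 'shoes'], ['Blue', 'Adidas', 'shirt']]
train_labels = [['color', 'brand', 'category'], ['color', 'brand', 'category']]

crf = sklearn_crfsuite.CRF(algorithm='lbfgs', max_iterations=100)
crf.fit([sentence_features(s) for s in train_sentences], train_labels)

# tag an unseen query
print(crf.predict([sentence_features(['Red', 'Nike', 'shirt'])]))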

how to evaluate sql query using RelNode object

I am trying to convert a SQL query to TinkerPop Gremlin. The sql2Gremlin library does this, but it treats a join as a relation, while I am relying on a no-join approach where you can refer to relations with a dot as the delimiter between two entities.
I have parsed and validated the query and I have a RelRoot object.
Apache Calcite returns a RelRoot object, which is the root of the algebraic expression.
Let's say I don't want to apply any query optimization; how do I use my RelNode visitor to transform the RelRoot into the TinkerPop Gremlin DSL?
Ideally I would first use the FROM clause and then apply the filters defined in the WHERE clause. How are the SELECT, filters, and FROM clauses represented in the RelRoot tree?
What does Apache Calcite mean by a relational expression or RelNode?
Rephrasing the same question without TinkerPop Gremlin context:
How should I use RelRoot visitor to visit the RelRoot and transform the query to another DSL?
I don't know why you insist on the RelRoot and not the RelNode tree, but Apache Calcite does its relational algebra optimizations on the RelNode stack. There is a class called RelVisitor that you might find interesting, since it does exactly what you need: visit all RelNodes. You can then extract the information you need from them and build your DSL with it.
EDIT: In RelVisitor, you have access to the parent node and the child nodes of the currently visited node. You can extract all the information usually available on a RelNode object (see the docs), and if you cast it to a specific relational algebra operation, for example Project, you can extract which fields are inside the Project operation by doing node.getRowType().getFieldList().forEach(field -> names.add(field.getName())), where names is a previously defined Set<String>. You can find the full code here.
You should also take a look at the algebra docs to understand how SQL maps to relational algebra in Calcite before attempting this.

How to build query form to request AWS CloudSearch?

I have a SearchDomain on AWS CloudSearch, and I know all the defined facet names.
I would like to build a web query form to use it, but I want to add my category values (facets) on the side, like it is done on the Amazon webstore.
The only way I have found to get the facet values is to make a query (params query), and the answer will contain the facets linked to my query results.
Is there a way to fetch all the possible facet.FIELD values to build the query form?
If not (as read here), how do you design a form using facets?
You could also use the matchall keyword in a structured query. Since you don't need the results, you can pass size=0 so you only get the facets, which will reduce latency.
/2013-01-01/search?q=matchall&q.parser=structured&size=0&facet.field={sort:'count',size:100}
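For example, from Python you could issue the same request with the requests library (the endpoint and the 'category' facet field below are placeholders for your own domain):

import requests

# replace with your own CloudSearch search endpoint
ENDPOINT = 'https://search-yourdomain-xxxxxxxxxxxx.us-east-1.cloudsearch.amazonaws.com'

params = {
    'q': 'matchall',
    'q.parser': 'structured',
    'size': 0,
    'facet.category': "{sort:'count',size:100}",
}
response = requests.get(ENDPOINT + '/2013-01-01/search', params=params)
print(response.json().get('facets', {}))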

What is a good design pattern to implement a dynamic data importer tool?

We are planning to build a dynamic data import tool - basically taking information in on one end in a specified format (Access, Excel, CSV) and uploading it into a web service.
The situation is that we do not know the export field names, so the application will need to be able to see the WSDL definition and map to the valid entries at the other end.
In the import section we can define most of the fields, but usually there are a few that are custom, which I see no problem with.
I just wonder if there is a design pattern that will fit this type of application or help with its development.
I am not sure where the complexity is in your application, so I will just give an example of how I have used patterns for importing data of different formats. I created a factory which takes the file format as an argument and returns a parser for that particular format. Then I use the builder pattern: the parser is given a builder, which the parser calls as it parses the file to construct the desired data objects in the application.
// In this example file format describes a house (complex data object)
AbstractReader reader = factory.createReader("name of file format");
AbstractBuilder builder = new HouseBuilder(list_of_houses);
reader.import(text_stream, builder);
// now the list_of_houses should contain an extra house
// as defined in the text_stream
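The same factory + builder idea could be sketched in Python like this (all class names are invented for illustration):

import csv

class HouseBuilder(object):
    # the builder: turns parsed records into domain objects
    def __init__(self, houses):
        self.houses = houses

    def build(self, record):
        self.houses.append({'rooms': int(record['rooms']), 'city': record['city']})

class CsvReader(object):
    # one parser per file format; each calls the builder as it parses
    def read(self, lines, builder):
        for record in csv.DictReader(lines):
            builder.build(record)

def create_reader(file_format):
    # the factory: picks the parser for the given format
    readers = {'csv': CsvReader}
    return readers[file_format]()

list_of_houses = []
create_reader('csv').read(['rooms,city', '3,Oslo'], HouseBuilder(list_of_houses))
print(list_of_houses)  # [{'rooms': 3, 'city': 'Oslo'}]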
I would say the Adapter pattern, as you are "adapting" the data from a file to an object, like the SqlDataAdapter does from a SQL table to a DataTable.
Have a different adapter for each file type/format - for example, SqlDataAdapter and MySqlDataAdapter handle the same commands against different data sources to achieve the same output DataTable.
Adapter pattern
HTH
Bones
Probably Bridge could fit, since you have to deal with different file formats.
And Façade to simplify the usage. Handle my reply with care, I'm just learning design patterns :)
You will probably also need Abstract Factory and Command patterns.
If the data doesn't match the input format, you will probably need to transform it somehow.
That's where the Command pattern comes in. Because the formats are dynamic, you will need to base the commands you generate on the input. That's where the Abstract Factory is useful.
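A small sketch of that Command + Abstract Factory combination in Python (all names are invented; the field mapping would come from inspecting the input file and the WSDL at runtime):

class TransformCommand(object):
    def execute(self, record):
        raise NotImplementedError

class RenameField(TransformCommand):
    # one concrete command: move a value from a source field to a target field
    def __init__(self, source, target):
        self.source, self.target = source, target

    def execute(self, record):
        record[self.target] = record.pop(self.source)
        return record

class CommandFactory(object):
    # builds the list of commands from the dynamically discovered mapping
    def commands_for(self, field_mapping):
        return [RenameField(src, dst) for src, dst in field_mapping.items()]

record = {'CustName': 'Acme'}
for command in CommandFactory().commands_for({'CustName': 'customer_name'}):
    record = command.execute(record)
print(record)  # {'customer_name': 'Acme'}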
Our situation is that we need to import parametric shapes from competitors' files. The layouts of their screens and data fields are similar but different enough that there is a conversion process. In addition, we have over half a dozen competitors, and maintenance would be a nightmare if done through code only. Since most of them use tables to store the parameters for their shapes, we wrote a general-purpose collection of objects to convert X into Y.
In my CAD/CAM application the file import is a Command. However, the conversion magic is done by a RuleSet via the following steps.
Import the data into a table. The field names are pulled in as well depending on the format.
We pass the table to a RuleSet. I will explain the structure of the RuleSet in a minute.
The RuleSet transforms the data into a new set of objects (or tables) which we retrieve.
We pass the result to the rest of the software.
A RuleSet is composed of a set of Rules. A Rule can contain another Rule. A Rule has a CONDITION that it tests, and a MAP TABLE.
The MAP TABLE maps an incoming field to a field (or property) in the result. There can be one mapping or a multitude. The mapping doesn't have to involve just poking the input value into an output field; we have a syntax for calculation and string concatenation as well.
This syntax is also used in the CONDITION and can incorporate multiple fields, like ([INFIELD1] & "-" & [INFIELD2]) = "A-B" or [DIM1] + [DIM2] > 10. Anything between the brackets is substituted with an incoming field.
Rules can contain other Rules. The way this works is that for a sub-Rule's mapping to apply, both its condition and those of its parent (or parents) have to be true. If a sub-Rule has a mapping that conflicts with a parent's mapping, then the sub-Rule's mapping applies.
If two Rules on the same level have conditions that are true and have conflicting mappings, then the Rule with the higher index (or lower on the list if you are looking at a tree view) will have its mapping applied.
Nested Rules are equivalent to ANDs, while Rules on the same level are the equivalent of ORs.
The result is a mapping table that is applied to the incoming data to transform it to the needed output.
It is amenable to being displayed in a UI - namely a tree view showing the rule hierarchy and a side panel showing the mapping table and conditions of the rule. Just as importantly, you can create wizards that automate common rule structures.
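To make the semantics concrete, here is a rough Python sketch of the nested-rule behaviour described above (all names and the example data are invented):

class Rule(object):
    def __init__(self, condition, mapping, children=None):
        self.condition = condition   # function(record) -> bool
        self.mapping = mapping       # {output_field: function(record) -> value}
        self.children = children or []

    def collect(self, record, result):
        # a sub-Rule only applies when its parents' conditions also held (AND)
        if not self.condition(record):
            return
        result.update(self.mapping)  # later/inner mappings override on conflict
        for child in self.children:
            child.collect(record, result)

def apply_ruleset(rules, record):
    mapping = {}
    for rule in rules:               # same-level Rules behave like ORs
        rule.collect(record, mapping)
    return dict((field, fn(record)) for field, fn in mapping.items())

# map competitor fields DIM1/DIM2 into WIDTH/HEIGHT when the condition holds
rules = [
    Rule(lambda r: r['DIM1'] + r['DIM2'] > 10,
         {'WIDTH': lambda r: r['DIM1'], 'HEIGHT': lambda r: r['DIM2']}),
]
print(apply_ruleset(rules, {'DIM1': 6, 'DIM2': 7}))  # {'WIDTH': 6, 'HEIGHT': 7}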