I want to compare data from my assets with data from the server.
I have a map of my asset images, and I get a list of images from the server.
I want to compare them in my init state: if a server entry matches an entry in the map, use the asset; otherwise, use the server link.
here is my map :
Map icons = {
"sport" : {
"light": MyIcons.varzeshi,
"dark" : MyIcons.varzeshi_dark,
},
"Cinema & Photography" : {
"light": MyIcons.namayeshi,
"dark" : MyIcons.namayeshi_dark,
},
"History" : {
"light": MyIcons.tarikh,
"dark" : MyIcons.tarikh_dark,
},
"Story" : {
"light": MyIcons.revayatgari,
"dark" : MyIcons.revayatgari_dark,
},
"Psychology" : {
"light": MyIcons.ravanshenasi,
"dark" : MyIcons.ravanshenasi_dark,
},
"Entertainment" : {
"light": MyIcons.sargarmi,
"dark" : MyIcons.sargarmi_dark,
},
"Literature" : {
"light": MyIcons.adabiat,
"dark" : MyIcons.adabiat_dark,
},
"Technology" : {
"light": MyIcons.fanavari,
"dark" : MyIcons.fanavari_dark,
},
"News & Review" : {
"light": MyIcons.khabari,
"dark" : MyIcons.khabari_dark,
},
"book reading" : {
"light": MyIcons.ketab,
"dark" : MyIcons.ketab_dark,
},
"Cultural & Social" : {
"light": MyIcons.ejtemaei,
"dark" : MyIcons.ejtemaei_dark,
},
"Health" : {
"light": MyIcons.pezeshki,
"dark" : MyIcons.pezeshki_dark,
},
"language" : {
"light": MyIcons.zabaan,
"dark" : MyIcons.zabaan_dark,
},
"Music" : {
"light": MyIcons.mosighi,
"dark" : MyIcons.mosighi_dark,
},
"Business" : {
"light": MyIcons.kasbkar,
"dark" : MyIcons.kasbkar_dark,
},
"Philosophy & Art" : {
"light": MyIcons.falsafedin,
"dark" : MyIcons.falsafedin_dark,
},
"Self improvement & Lifestyle" : {
"light": MyIcons.toseEfardi,
"dark" : MyIcons.toseEfardi_dark,
},
};
and here is my data from the server, accessed as data[index]:
{
"Cat_Image_Dark": "https://varzeshi-dark.svg",
"Cat_Image": "https://varzeshi.svg",
"Cat_EN_Name": "Sport"
},
So how can I match the server item against the map?
I am assuming that the data from the server is coming from some sort of API and is passed in as a string. If so:
import 'dart:convert';
var dataFromServer = json.decode(data[index]);
print(dataFromServer["Cat_Image"]); //prints "https://varzeshi.svg"
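Once the item is decoded, one way to decide between the asset and the server URL is to look the category name up in the map. Below is a minimal sketch, assuming the icons map from the question is in scope, that the MyIcons values are asset path strings, and that the lookup should ignore case (the server sends "Sport" while the map key is "sport"); resolveIcon and isDark are illustrative names:
// A sketch: return the asset path when the server category name matches a
// key in `icons` (ignoring case); otherwise fall back to the server URL.
String resolveIcon(Map<String, dynamic> serverItem, bool isDark) {
  final serverName = (serverItem['Cat_EN_Name'] as String).toLowerCase();
  for (final key in icons.keys) {
    if (key.toString().toLowerCase() == serverName) {
      return icons[key][isDark ? 'dark' : 'light'];
    }
  }
  // No matching asset: use the server link instead.
  return isDark ? serverItem['Cat_Image_Dark'] : serverItem['Cat_Image'];
}
In initState you could then call resolveIcon(Map<String, dynamic>.from(dataFromServer), isDark) for each item and store the results in a list.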
I have the following data
{
"companyID" : "amz",
"companyType" : "ret",
"employeeID" : "ty-5a62fd78e8d20ad"
},
{
"companyID" : "amz",
"companyType" : "ret",
"employeeID" : "ay-5a62fd78e8d20ad"
},
{
"companyID" : "mic",
"companyType" : "cse",
"employeeID" : "by-5a62fd78e8d20ad"
},
{
"companyID" : "ggl",
"companyType" : "cse",
"employeeID" : "ply-5a62fd78e8d20ad"
},
{
"companyID" : "ggl",
"companyType" : "cse",
"employeeID" : "wfly-5a62ad"
}
I want the following result: basically a count over the combinations of values, like mic-cse, ggl-cse, amz-ret.
"agg_by_company_type" : {
"buckets" : [
{
"key" : "ret",
"doc_count" : 1
},
{
"key" : "cse",
"doc_count" : 2
}
]
}
How do I do it?
I have tried the following aggregations:
"agg_by_companyID_topHits": {
"terms": {
"field": "companyID.keyword",
"size": 100000,
"min_doc_count": 1,
"shard_min_doc_count": 0,
"show_term_doc_count_error": true,
"order": {
"_key": "asc"
}
},
"aggs": {
"agg_by_companyType" : {
"top_hits": {
"size": 1,
"_source": {
"includes": ["companyType"]
}
}
}
}
}
But this just gives me the first group-by on companyID; on top of that data, I want the count of company types.
this is the response I get
"agg_by_companyID_topHits" : {
"doc_count_error_upper_bound" : 0,
"sum_other_doc_count" : 0,
"buckets" : [
{
"key" : "amz",
"doc_count" : 2,
"doc_count_error_upper_bound" : 0,
"agg_by_companytype" : {
"hits" : {
"total" : {
"value" : 2,
"relation" : "eq"
},
"max_score" : 0.0,
"hits" : [
{
"_index" : "my-index",
"_type" : "_doc",
"_id" : "uytuygjuhg",
"_score" : 0.0,
"_source" : {
"companyType" : "ret"
}
}
]
}
}
},
{
"key" : "mic",
"doc_count" : 1,
"doc_count_error_upper_bound" : 0,
"agg_by_companytype" : {
"hits" : {
"total" : {
"value" : 1,
"relation" : "eq"
},
"max_score" : 0.0,
"hits" : [
{
"_index" : "my-index",
"_type" : "_doc",
"_id" : "uytuygjuhg",
"_score" : 0.0,
"_source" : {
"companyType" : "cse"
}
}
]
}
}
},
{
"key" : "ggl",
"doc_count" : 2,
"doc_count_error_upper_bound" : 0,
"agg_by_companytype" : {
"hits" : {
"total" : {
"value" : 2,
"relation" : "eq"
},
"max_score" : 0.0,
"hits" : [
{
"_index" : "my-index",
"_type" : "_doc",
"_id" : "uytuygjuhg",
"_score" : 0.0,
"_source" : {
"companyType" : "ret"
}
}
]
}
}
}
]
}
If it were Spark, it would be simple to partition by companyID, group it, and then group by companyType and count to get the desired result, but I'm not sure how to do it in ES.
Important note: I am working with OpenSearch.
A possible solution for this in Elasticsearch, the multi_terms aggregation, is not available in versions before v7.12, so I'm wondering how this was done before that feature existed.
We came across this issue because AWS migrated from Elasticsearch to OpenSearch.
Use the multi_terms aggregation (docs here):
GET /products/_search
{
"aggs": {
"genres_and_products": {
"multi_terms": {
"terms": [{
"field": "companyID"
}, {
"field": "companyType"
}]
}
}
}
}
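Each resulting bucket's key is then the combination of companyID and companyType, and its doc_count is the number of documents with that combination.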
Alternatively, you can use a script in a terms aggregation, like this:
GET b1/_search
{
"aggs": {
"genres": {
"terms": {
"script": {
"source": "doc['companyID'].value+doc['companyType'].value",
"lang": "painless"
}
}
}
}
}
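If you need exactly the output shown in the question (the number of distinct companies per companyType), another pre-7.12 option is a terms aggregation on companyType with a cardinality sub-aggregation on companyID. This is only a sketch, and it assumes both fields have keyword sub-fields (the query you tried already uses companyID.keyword):
GET /my-index/_search
{
  "size": 0,
  "aggs": {
    "agg_by_company_type": {
      "terms": { "field": "companyType.keyword" },
      "aggs": {
        "distinct_companies": {
          "cardinality": { "field": "companyID.keyword" }
        }
      }
    }
  }
}
Here each bucket's doc_count is the number of documents per companyType; the distinct-company count is in distinct_companies.value, and note that cardinality is approximate for very high cardinalities.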
I've created my index below using Kibana, which is connected to my AWS ES domain:
PUT sals_poc_test_20210217-7
{
"settings" : {
"index" : {
"number_of_shards" : 10,
"number_of_replicas" : 1,
"max_result_window": 50000,
"max_rescore_window": 50000
}
},
"mappings": {
"properties": {
"identifier": {
"type": "keyword"
},
"CLASS_NAME": {
"type": "keyword"
},
"CLIENT_ID": {
"type": "keyword"
}
}
}
}
Then I indexed 100 documents; the command below returns all 100 of them:
POST /sals_poc_test_20210217-7/_search
{
"query": {
"match": {
"_index": "sals_poc_test_20210217-7"
}
}
}
two sample documents are below:
{
"_index" : "sals_poc_test_20210217-7",
"_type" : "_doc",
"_id" : "cd0a3723-106b-4aea-b916-161e5563290f",
"_score" : 1.0,
"_source" : {
"identifier" : "xweeqkrz",
"class_name" : "/Sample_class_name_1",
"client_id" : "random_str"
}
},
{
"_index" : "sals_poc_test_20210217-7",
"_type" : "_doc",
"_id" : "cd0a3723-106b-4aea-b916-161e556329ab",
"_score" : 1.0,
"_source" : {
"identifier" : "xweeqkra",
"class_name" : "/Sample_class_name_2",
"client_id" : "random_str_2"
}
}
But when I search by CLASS_NAME with the command below:
POST /sals_poc_test_20210217-7/_search
{
"size": 200,
"query": {
"bool": {
"must": [
{ "match": { "CLASS_NAME": "/Sample_class_name_1"}}
]
}
}
}
not only the documents that match this class_name are returned, but also other ones.
Could anyone shed some light on this case, please?
I suspect the way I wrote my search query is problematic, but I cannot figure out why.
Thanks!
Elasticsearch field names are case sensitive: class_name is not the same field as CLASS_NAME. The sample documents contain class_name, but the mapping defined at index creation has CLASS_NAME.
If you GET sals_poc_test_20210217-7, both class-name attributes should appear in the index mapping: the one defined when creating the index, and a second one created dynamically when documents were added.
So the query should be on CLASS_NAME or class_name.keyword (a corrected query follows the mapping below); by default, Elasticsearch creates both a text field and a .keyword sub-field for dynamically mapped string attributes:
"CLASS_NAME" : {
"type" : "keyword"
},
"class_name" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
}
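A sketch of the corrected query, assuming the documents were indexed with the lowercase class_name field as shown in the samples and that an exact match on the whole value is wanted:
POST /sals_poc_test_20210217-7/_search
{
  "size": 200,
  "query": {
    "bool": {
      "must": [
        { "term": { "class_name.keyword": "/Sample_class_name_1" } }
      ]
    }
  }
}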
My JSON response from the Google Maps API gives:
%{ body: body} = HTTPoison.get! url
body = {
"geocoded_waypoints" : [{ ... },{ ... }],
"routes" : [{
"bounds" : { ...},
"copyrights" : "Map data ©2018 Google",
"legs" : [
{
"distance" : {
"text" : "189 km",
"value" : 188507
},
"duration" : {
"text" : "2 hours 14 mins",
"value" : 8044
},
"end_address" : "Juhan Liivi 2, 50409 Tartu, Estonia",
"end_location" : {
"lat" : 58.3785389,
"lng" : 26.7146963
},
"start_address" : "J. Sütiste tee 44, 13420 Tallinn, Estonia",
"start_location" : {
"lat" : 59.39577569999999,
"lng" : 24.6861104
},
"steps" : [
{ ... },
{ ... },
{ ... },
{ ... },
{
"distance" : {
"text" : "0.9 km",
"value" : 867
},
"duration" : {
"text" : "2 mins",
"value" : 104
},
"end_location" : {
"lat" : 59.4019886,
"lng" : 24.7108114
},
"html_instructions" : "XXXX",
"maneuver" : "turn-left",
"polyline" : {
"points" : "XXXX"
},
"start_location" : {
"lat" : 59.3943677,
"lng" : 24.708647
},
"travel_mode" : "DRIVING"
},
{ ... },
{ ... },
{ ... },
{ ... },
{ ... },
{ ... },
{ ... },
{ ... },
{ ... }
],
"traffic_speed_entry" : [],
"via_waypoint" : []
}
],
"overview_polyline" : { ... },
"summary" : "Tallinn–Tartu–Võru–Luhamaa/Route 2",
"warnings" : [],
"waypoint_order" : []
}
],
"status" : "OK"
}
(See the attached image.)
In red is what I'm getting with the command below using Regex.named_captures:
%{"duration_text" => duration_text, "duration_value" => duration_value} = Regex.named_captures ~r/duration\D+(?<duration_text>\d+ mins)\D+(?<duration_value>\d+)/, body
In blue (see the attached image) is what I want to extract from body.
body is the JSON response of my Google API URL in a browser.
Would you please assist and provide the regex?
Since http://www.elixre.uk/ is down, I can't find any tool to help with that.
Thanks in advance.
Don't use regexes on a JSON string. Instead, convert the JSON string to an Elixir map using Jason, Poison, etc., then use the keys in the map to look up the data you are interested in.
Here's an example:
json_map = Jason.decode!(get_json())
[first_route | _rest] = json_map["routes"]
[first_leg | _rest] = first_route["legs"]
distance = first_leg["distance"]
=> %{"text" => "189 km", "value" => 188507}
Similarly, you can get the other parts with:
duration = first_leg["duration"]
end_address = first_leg["end_address"]
...
...
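If the goal is the duration of every step in a leg (the part highlighted in blue), it could look something like this (a sketch, assuming json_map is the decoded map from the example above):
# Collect the {text, value} duration pair for every step of the first leg.
step_durations =
  json_map
  |> Map.fetch!("routes")
  |> List.first()
  |> Map.fetch!("legs")
  |> List.first()
  |> Map.fetch!("steps")
  |> Enum.map(fn step ->
    {step["duration"]["text"], step["duration"]["value"]}
  end)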
I have just started using Elasticsearch 6 on AWS.
I have inserted data into my ES endpoint, but I can only search it using the full sentence, not by matching individual words. In the past I would have used not_analyzed, it seems, but this has been replaced by 'keyword'. However, this still doesn't work.
Here is my index:
{
"seven" : {
"aliases" : { },
"mappings" : {
"myobjects" : {
"properties" : {
"id" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
},
"myId" : {
"type" : "text"
},
"myUrl" : {
"type" : "text"
},
"myName" : {
"type" : "keyword"
},
"myText" : {
"type" : "keyword"
}
}
}
},
"settings" : {
"index" : {
"number_of_shards" : "5",
"provided_name" : "seven",
"creation_date" : "1519389595593",
"analysis" : {
"filter" : {
"nGram_filter" : {
"token_chars" : [
"letter",
"digit",
"punctuation",
"symbol"
],
"min_gram" : "2",
"type" : "nGram",
"max_gram" : "20"
}
},
"analyzer" : {
"nGram_analyzer" : {
"filter" : [
"lowercase",
"asciifolding",
"nGram_filter"
],
"type" : "custom",
"tokenizer" : "whitespace"
},
"whitespace_analyzer" : {
"filter" : [
"lowercase",
"asciifolding"
],
"type" : "custom",
"tokenizer" : "whitespace"
}
}
},
"number_of_replicas" : "1",
"uuid" : "_vNXSADUTUaspBUu6zdh-g",
"version" : {
"created" : "6000199"
}
}
}
}
}
I have data like this:
{
"took" : 3,
"timed_out" : false,
"_shards" : {
"total" : 5,
"successful" : 5,
"skipped" : 0,
"failed" : 0
},
"hits" : {
"total" : 13,
"max_score" : 1.0,
"hits" : [
{
"_index" : "seven",
"_type" : "myobjects",
"_id" : "8",
"_score" : 1.0,
"_source" : {
"myUrl" : "https://myobjects.com/wales.gif",
"myText" : "Objects for Welsh Things",
"myName" : "Wales"
}
},
{
"_index" : "seven",
"_type" : "myobjects",
"_id" : "5",
"_score" : 1.0,
"_source" : {
"myUrl" : "https://myobjects.com/flowers.gif",
"myText" : "Objects for Flowery Things",
"myNoun" : "Flowers"
}
}
]
}
}
If I then search for 'Objects' I get nothing. If I search for 'Objects for Flowery Things' I get the single result.
I am using this to search for items :
POST /seven/objects/_search?pretty
{
"query": {
"multi_match" : { "query" : q, "fields": ["myText", "myNoun"], "fuzziness":"AUTO" }
}
}
Can anybody tell me how to have the search match any word in the sentence rather than having to put the whole sentence in the query?
This is because your myName and myText fields are of keyword type:
...
"myName" : {
"type" : "keyword"
},
"myText" : {
"type" : "keyword"
}
...
and because of this they are not analyzed, so only an exact match on the full value will work for them. Change the type to text and it should work as you expect:
...
"myName" : {
"type" : "text"
},
"myText" : {
"type" : "text"
}
...
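Note that the type of an existing field cannot be changed in place; one option is to create a new index with the corrected mapping and copy the data over with the _reindex API. A rough sketch, assuming a new index name of seven_v2 (other fields will be mapped dynamically during reindexing):
PUT /seven_v2
{
  "mappings": {
    "myobjects": {
      "properties": {
        "myName": { "type": "text" },
        "myText": { "type": "text" }
      }
    }
  }
}
POST /_reindex
{
  "source": { "index": "seven" },
  "dest": { "index": "seven_v2" }
}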
In the output below, I need to pick out the first entry of each section, which is the name of an Elasticsearch index.
For instance: nprod#n_docs, platform-api-stage, nprod#janeuk_classic, nprod#delista.com#1.
I know that they sit between character patterns like
{ "
and a
: {
"settings" : {
So what would my script look like to grab these values so I can write them out to another file?
My output looks like:
{
"nprod#n_docs" : {
"settings" : {
"index.analysis.analyzer.rwn_text_analyzer.char_filter" : "html_strip",
"index.analysis.analyzer.rwn_text_analyzer.language" : "English",
"index.translog.disable_flush" : "false",
"index.version.created" : "190199",
"index.number_of_replicas" : "1",
"index.number_of_shards" : "5",
"index.analysis.analyzer.rwn_text_analyzer.type" : "snowball",
"index.translog.flush_threshold_size" : "60",
"index.translog.flush_threshold_period" : "",
"index.translog.flush_threshold_ops" : "500"
}
},
"platform-api-stage" : {
"settings" : {
"index.analysis.analyzer.api_edgeNGram.type" : "custom",
"index.analysis.analyzer.api_edgeNGram.filter.0" : "api_nGram",
"index.analysis.filter.api_nGram.max_gram" : "50",
"index.analysis.analyzer.api_edgeNGram.filter.1" : "lowercase",
"index.analysis.analyzer.api_path.type" : "custom",
"index.analysis.analyzer.api_path.tokenizer" : "path_hierarchy",
"index.analysis.filter.api_nGram.min_gram" : "2",
"index.analysis.filter.api_nGram.type" : "edgeNGram",
"index.analysis.analyzer.api_edgeNGram.tokenizer" : "standard",
"index.analysis.filter.api_nGram.side" : "front",
"index.analysis.analyzer.api_path.filter.0" : "lowercase",
"index.number_of_shards" : "5",
"index.number_of_replicas" : "1",
"index.version.created" : "200599"
}
},
"nprod#janeuk_classic" : {
"settings" : {
"index.analysis.analyzer.n_text_analyzer.language" : "English",
"index.translog.disable_flush" : "false",
"index.version.created" : "190199",
"index.number_of_replicas" : "1",
"index.number_of_shards" : "5",
"index.analysis.analyzer.n_text_analyzer.char_filter" : "html_strip",
"index.analysis.analyzer.n_text_analyzer.type" : "snowball",
"index.translog.flush_threshold_size" : "60",
"index.translog.flush_threshold_period" : "",
"index.translog.flush_threshold_ops" : "500"
}
},
"nprod#delista.com#1" : {
"settings" : {
"index.analysis.analyzer.n_text_analyzer.language" : "English",
"index.translog.disable_flush" : "false",
"index.version.created" : "191199",
"index.number_of_replicas" : "1",
"index.number_of_shards" : "5",
"index.analysis.analyzer.n_text_analyzer.char_filter" : "html_strip",
"index.analysis.analyzer.n_text_analyzer.type" : "snowball",
"index.translog.flush_threshold_size" : "60",
"index.translog.flush_threshold_period" : "",
"index.translog.flush_threshold_ops" : "500"
}
},
That's JSON. Read the data and parse it using JSON::XS.
use JSON::XS qw( decode_json );
# Slurp the whole file; $qfn holds the path to the JSON file.
my $file;
{
open(my $fh, '<:raw', $qfn)
or die("Can't open \"$qfn\": $!\n");
local $/;
$file = <$fh>;
}
my $data = decode_json($file);
Then, just traverse the tree for the information you want.
my @index_names = keys(%$data);
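To write those names out to another file, as the question asks, something like the following could be appended (a sketch, assuming an output file named index_names.txt):
# Print each top-level key (index name) on its own line.
open(my $out, '>', 'index_names.txt')
    or die("Can't open \"index_names.txt\": $!\n");
print {$out} "$_\n" for sort @index_names;
close($out);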