Get countries (in multiple languages) based on the given ISO country codes

I'm building a Django application and I'm trying to get a list of countries (which can contain country names in multiple languages) based on the ISO code given as input. Consider the following scenario:
"iso": "AU",
"countries": ["Croatia","Egypt","Australia","Griechenland","Australien","Ausztrália"]
In the given case, the result should be:
"matched_countries": ["Australia","Australien","Ausztrália"]
I can think of a solution using a map of all countries and their corresponding ISO codes. But how can I cover all languages? Manually collecting all names would be tough work; is there any tool that could be used to achieve this?

You may find this useful:
https://stefangabos.github.io/world_countries/
Download the country info in JSON and write a function that searches the JSON and returns the needed countries.
You may want to manipulate the JSON so that querying and returning data is as easy as possible. For example, you could restructure it so that at the first level the ISO code is a key and the value is a dictionary of translated country names:
{
    "af": {
        "ar": "أفغانستان",
        "bg": "Афганистан",
        "cs": "Afghánistán",
        "da": "Afghanistan",
        "de": "Afghanistan",
        "el": "Αφγανιστάν",
        "en": "Afghanistan",
        "es": "Afganistán",
        "et": "Afganistan",
        "eu": "Afganistan",
        "fi": "Afganistan",
        "fr": "Afghanistan",
        "hu": "Afganisztán",
        "it": "Afghanistan",
        "ja": "アフガニスタン",
        "ko": "아프가니스탄",
        "lt": "Afganistanas",
        "nl": "Afghanistan",
        "no": "Afghanistan",
        "pl": "Afganistan",
        "pt": "Afeganistão",
        "ro": "Afganistan",
        "ru": "Афганистан",
        "sk": "Afganistan",
        "sv": "Afghanistan",
        "th": "อัฟกานิสถาน",
        "uk": "Афганістан",
        "zh": "阿富汗",
        "zh-tw": "阿富汗"
    },
    "al": {
        "ar": "ألبانيا",
        "bg": "Албания",
        "cs": "Albánie",
        "da": "Albanien",
        "de": "Albanien",
        "el": "Αλβανία",
        "en": "Albania",
        "es": "Albania",
        "et": "Albaania",
        "eu": "Albania",
        "fi": "Albania",
        "fr": "Albanie",
        "hu": "Albánia",
        "it": "Albania",
        "ja": "アルバニア",
        "ko": "알바니아",
        "lt": "Albanija",
        "nl": "Albanië",
        "no": "Albania",
        "pl": "Albania",
        "pt": "Albânia",
        "ro": "Albania",
        "ru": "Албания",
        "sk": "Albánsko",
        "sv": "Albanien",
        "th": "แอลเบเนีย",
        "uk": "Албанія",
        "zh": "阿尔巴尼亚",
        "zh-tw": "阿爾巴尼亞"
    }
}
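If you go this route, a sketch like the following could build that structure from the repository's per-language downloads. This assumes a layout of data/countries/<lang>/world.json where each file is a list of objects with alpha2 and name keys; verify against the files you actually download:
import json
import os

def build_country_data(base_dir):
    # Merge per-language country lists into {alpha2: {lang: name}}
    country_data = {}
    for lang in os.listdir(base_dir):
        path = os.path.join(base_dir, lang, 'world.json')
        if not os.path.isfile(path):
            continue
        with open(path, encoding='utf8') as f:
            for entry in json.load(f):
                country_data.setdefault(entry['alpha2'], {})[lang] = entry['name']
    return country_data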
Writing functions that return the desired values is quite straightforward:
import json

# Read country data from the JSON file
with open('countries.json', 'r', encoding='utf8') as cfile:
    country_data = json.load(cfile)

# Returns all translations of a country's name
def return_countries(iso):  # Use the two-letter ISO code
    return country_data[iso]

# Returns the country name in [lang] language
def return_country(iso, lang):  # Use the two-letter ISO code and desired language
    return country_data[iso][lang]
Examples of outputs:
Function return_countries:
print(return_countries('af'))
Output:
{'ar': 'أفغانستان', 'bg': 'Афганистан', 'cs': 'Afghánistán', 'da': 'Afghanistan', 'de': 'Afghanistan', 'el': 'Αφγανιστάν', 'en': 'Afghanistan', 'es': 'Afganistán', 'et': 'Afganistan', 'eu': 'Afganistan', 'fi': 'Afganistan', 'fr': 'Afghanistan', 'hu': 'Afganisztán', 'it': 'Afghanistan', 'ja': 'アフガニスタン', 'ko': '아프가니스탄', 'lt': 'Afganistanas', 'nl': 'Afghanistan', 'no': 'Afghanistan', 'pl': 'Afganistan', 'pt': 'Afeganistão', 'ro': 'Afganistan', 'ru': 'Афганистан', 'sk': 'Afganistan', 'sv': 'Afghanistan', 'th': 'อัฟกานิสถาน', 'uk': 'Афганістан', 'zh': '阿富汗', 'zh-tw': '阿富汗'}
Function return_country:
print(return_country('af', 'fi'))
Output:
Afganistan
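Building on these helpers, here is a minimal sketch of the matching function the question asks for (match_countries is a hypothetical name, not from any library): collect every translation for the given ISO code, then keep the input names that appear among them.
# Hypothetical helper built on country_data from above: returns the
# entries of `countries` that are translations of the country with
# the given two-letter ISO code
def match_countries(iso, countries):
    translations = set(country_data[iso.lower()].values())
    return [c for c in countries if c in translations]

print(match_countries('AU', ['Croatia', 'Egypt', 'Australia',
                             'Griechenland', 'Australien', 'Ausztrália']))
# Expected output: ['Australia', 'Australien', 'Ausztrália']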

Related

Get Dimensions for USAGE_TYPE AWS Boto3 CostExplorer Client

I'm trying to get costs using the CostExplorer client in boto3, but I can't find the values to use as a Dimension filter. The documentation says that we can extract those values from GetDimensionValues, but how do I use GetDimensionValues?
response = client.get_cost_and_usage(
    TimePeriod={
        'Start': str(start_time).split()[0],
        'End': str(end_time).split()[0]
    },
    Granularity='DAILY',
    Filter={
        'Dimensions': {
            'Key': 'USAGE_TYPE',
            'Values': [
                'DataTransfer-In-Bytes'
            ]
        }
    },
    Metrics=[
        'NetUnblendedCost',
    ],
    GroupBy=[
        {
            'Type': 'DIMENSION',
            'Key': 'SERVICE'
        },
    ]
)
The boto3 reference for GetDimensionValues has a lot of details on how to use that call. Here's some sample code you might use to print out possible dimension values:
import boto3

client = boto3.client('ce')  # Cost Explorer client

response = client.get_dimension_values(
    TimePeriod={
        'Start': '2022-01-01',
        'End': '2022-06-01'
    },
    Dimension='USAGE_TYPE',
    Context='COST_AND_USAGE',
)

for dimension_value in response["DimensionValues"]:
    print(dimension_value["Value"])
Output:
APN1-Catalog-Request
APN1-DataTransfer-Out-Bytes
APN1-Requests-Tier1
APN2-Catalog-Request
APN2-DataTransfer-Out-Bytes
APN2-Requests-Tier1
APS1-Catalog-Request
APS1-DataTransfer-Out-Bytes
.....
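Note that the results can be paginated: per the boto3 reference, the response includes a NextPageToken that you pass back in to retrieve the remaining values. A sketch of collecting all of them:
import boto3

client = boto3.client('ce')  # Cost Explorer

# Collect every USAGE_TYPE value, following NextPageToken across pages
values = []
kwargs = {
    'TimePeriod': {'Start': '2022-01-01', 'End': '2022-06-01'},
    'Dimension': 'USAGE_TYPE',
    'Context': 'COST_AND_USAGE',
}
while True:
    response = client.get_dimension_values(**kwargs)
    values.extend(dv['Value'] for dv in response['DimensionValues'])
    token = response.get('NextPageToken')
    if not token:
        break
    kwargs['NextPageToken'] = token

print(len(values), 'dimension values found')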

@aws/dynamodb-data-mapper: How to write a ConditionExpression in the QueryOptions filter?

I would like to fetch data from a DynamoDB database by using [AWS-DynamoDB-Data-Mapper][1]. I am using QueryIterator, and to provide indexName, limit, and filter I am using [QueryOptions][2]. In QueryOptions I would like to use filter?: ConditionExpression, where I would like to use the contains() function, but I am not able to form the ConditionExpression. A sample item and my query are below:
{
    "Id": { "N": "456" },
    "ProductCategory": { "S": "Sporting Goods" },
    "Price": { "N": "650" }
}
const paginator = dbMapper.query(
    Products,
    {
        Id: 456,
        Price: between(500, 1000)
    },
    {
        indexName: 'indexByPriceId',
        limit: 10,
        scanIndexForward: false,
        filter: contains(ProductCategory, 'goods')
    }
);
It returns a syntax error on the line filter: contains(ProductCategory, 'goods'); it says ContainsPredicate is not a ConditionExpression.
Please help me out by providing the correct ConditionExpression for QueryOptions.
[1]: https://github.com/awslabs/dynamodb-data-mapper-js
[2]: https://awslabs.github.io/dynamodb-data-mapper-js/packages/dynamodb-data-mapper/interfaces/queryoptions.html#projection

Access a list item stored in key value pair inside a list of map

I am using Flutter/Dart and I have run into the following problem.
I have a list of maps like this:
var questions = [
  {
    'questionText': 'What\'s your favorite color?',
    'answer': ['Black', 'Red', 'Green', 'White']
  },
  {
    'questionText': 'What\'s your favorite animal?',
    'answer': ['Deer', 'Tiger', 'Lion', 'Bear']
  },
  {
    'questionText': 'What\'s your favorite movie?',
    'answer': ['Die Hard', 'Due Date', 'Deep Rising', 'Dead or Alive']
  },
];
Now suppose I need to get the string Tiger from this list. How do I do that? Dart sees this as List<Map<String, Object>> questions.
Maybe a more portable way with a function:
String getAnswer(int question, int answer) {
  return (questions[question]['answer'] as List<String>)[answer];
}

// Get 'Tiger'
String a = getAnswer(1, 1);
You can cast the object in the list in the following way and then use an index to get any value.
var p = questions[1]['answer'] as List<String>;
print(p[1]);

AWS Lambda returning a response card throws a null fulfillmentState error in Amazon Lex?

I have written this function, which returns fine when it is just returning a String. I have followed the syntax for the response card very closely and it passes my test case in Lambda. However, when it's called through Lex it throws the error I'll post below. It says fulfillmentState cannot be null, but in the error it throws it shows that it is not null.
I have tried switching the order of the dialog action and response card, and I have tried switching the order of "type" and "fulfillmentState". Function:
def backup_phone(intent_request):
    back_up_location = get_slots(intent_request)["BackupLocation"]
    phone_os = get_slots(intent_request)["PhoneType"]

    try:
        from googlesearch import search
    except ImportError:
        print("No module named 'google' found")

    # to search
    query = "How to back up {} to {}".format(phone_os, back_up_location)
    result_list = []
    for j in search(query, tld="com", num=5, stop=5, pause=2):
        result_list.append(j)

    return {
        "dialogAction": {
            "fulfilmentState": "Fulfilled",
            "type": "Close",
            "contentType": "Plain Text",
            'content': "Here you go",
        },
        'responseCard': {
            'contentType': 'application/vnd.amazonaws.card.generic',
            'version': 1,
            'genericAttachments': [{
                'title': "Please select one of the options",
                'subTitle': "{}".format(query),
                'buttons': [
                    {
                        "text": "{}".format(result_list[0]),
                        "value": "test"
                    },
                ]
            }]
        }
    }
screenshot of test case passing in lambda: https://ibb.co/sgjC2WK
screenshot of error throw in Lex: https://ibb.co/yqwN42m
Text for the error in Lex:
"An error has occurred: Invalid Lambda Response: Received invalid response from Lambda: Can not construct instance of CloseDialogAction, problem: fulfillmentState must not be null for Close dialog action at [Source: {"dialogAction": {"fulfilmentState": "Fulfilled", "type": "Close", "contentType": "Plain Text", "content": "Here you go"}, "responseCard": {"contentType": "application/vnd.amazonaws.card.generic", "version": 1, "genericAttachments": [{"title": "Please select one of the options", "subTitle": "How to back up Iphone to windows", "buttons": [{"text": "https://www.easeus.com/iphone-data-transfer/how-to-backup-your-iphone-with-windows-10.html", "value": "test"}]}]}}; line: 1, column: 121]"
I fixed the issue by building the response in a helper function, storing all the info in the correct syntax in an object, and returning that object from the function. Note that along the way this corrected two problems with the original return value: the key was misspelled as "fulfilmentState" (Lex expects "fulfillmentState", as the error message shows), and the content now lives in a proper "message" object instead of sitting directly inside "dialogAction". Relevant code below:
def close(session_attributes, fulfillment_state, message, response_card):
    response = {
        'sessionAttributes': session_attributes,
        'dialogAction': {
            'type': 'Close',
            'fulfillmentState': fulfillment_state,
            'message': message,
            "responseCard": response_card,
        }
    }
    return response

def backup_phone(intent_request):
    back_up_location = get_slots(intent_request)["BackupLocation"]
    phone_os = get_slots(intent_request)["PhoneType"]

    try:
        from googlesearch import search
    except ImportError:
        print("No module named 'google' found")

    # to search
    query = "How to back up {} to {}".format(phone_os, back_up_location)
    result_list = []
    for j in search(query, tld="com", num=1, stop=1, pause=1):
        result_list.append(j)

    return close(intent_request['sessionAttributes'],
                 'Fulfilled',
                 {'contentType': 'PlainText',
                  'content': 'test'},
                 {'version': 1,
                  'contentType': 'application/vnd.amazonaws.card.generic',
                  'genericAttachments': [
                      {
                          'title': "{}".format(query.lower()),
                          'subTitle': "Please select one of the options",
                          "imageUrl": "",
                          "attachmentLinkUrl": "{}".format(result_list[0]),
                          'buttons': [
                              {
                                  "text": "{}".format(result_list[0]),
                                  "value": "Thanks"
                              },
                          ]
                      }
                  ]}
                 )

ElasticSearch: Getting old visitor data into an index

I'm learning ElasticSearch in the hopes of dumping my business data into ES and viewing it with Kibana. After a week of various issues I finally have ES and Kibana working (1.7.0 and 4 respectively) on 2 Ubuntu 14.04 desktop machines (clustered).
The issue I'm having now is how best to get the data into ES. The data flow is that I capture the PHP global variables $_REQUEST and $_SERVER for each visit to a text file with a unique ID. From there, if they fill in a form, I capture that data in a text file, also named with that unique ID, in a different directory. Then my customers tell me whether that form fill was any good, with a delay of up to 50 days.
So I'm starting with the visitor data - $_REQUEST and $_SERVER. A lot of it is redundant so I'm really just attempting to capture the timestamp of their arrival, their IP, the IP of the server they visited, the domain they visited, the unique ID, and their User Agent. So I created this mapping:
time_date_mapping = { 'type': 'date_time' }
str_not_analyzed = { 'type': 'string' }  # Originally this included 'index': 'not_analyzed' as well

visit_mapping = {
    'properties': {
        'uniqID': str_not_analyzed,
        'pages': str_not_analyzed,
        'domain': str_not_analyzed,
        'Srvr IP': str_not_analyzed,
        'Visitor IP': str_not_analyzed,
        'Agent': { 'type': 'string' },
        'Referrer': { 'type': 'string' },
        'Entrance Time': time_date_mapping,  # Stored as a Unix timestamp
        'Request Time': time_date_mapping,   # Stored as a Unix timestamp
        'Raw': { 'type': 'string', 'index': 'not_analyzed' },
    },
}
I then enter it into ES with:
es.index(
    index=Visit_to_ElasticSearch.INDEX,
    doc_type=Visit_to_ElasticSearch.DOC_TYPE,
    id=self.uniqID,
    timestamp=int(math.floor(self._visit['Entrance Time'])),
    body=visit
)
When I look at the data in the index on ES, only Entrance Time, _id, _type, domain, and uniqID are indexed for searching (according to Kibana). All of the data is present in the document, but most of the fields show "Unindexed fields can not be searched."
Additionally, I was attempting to get a pie chart of the Agents. But I couldn't figure out how to get it visualized, because no matter what boxes I click, the Agent field is never an option for aggregation. I just mention it because it seems the fields which are indexed do show up.
I've attempted to mimic the mapping examples in the elasticsearch.py example which pulls in GitHub data. Can someone correct me on how I'm using that map?
Thanks
------------ Mapping -------------
{
    "visits": {
        "mappings": {
            "visit": {
                "properties": {
                    "Agent": {
                        "type": "string"
                    },
                    "Entrance Time": {
                        "type": "date",
                        "format": "dateOptionalTime"
                    },
                    "Raw": {
                        "properties": {
                            "Entrance Time": {
                                "type": "double"
                            },
                            "domain": {
                                "type": "string"
                            },
                            "uniqID": {
                                "type": "string"
                            }
                        }
                    },
                    "Referrer": {
                        "type": "string"
                    },
                    "Request Time": {
                        "type": "string"
                    },
                    "Srvr IP": {
                        "type": "string"
                    },
                    "Visitor IP": {
                        "type": "string"
                    },
                    "domain": {
                        "type": "string"
                    },
                    "uniqID": {
                        "type": "string"
                    }
                }
            }
        }
    }
}
------------- Update and New Mapping -----------
So I deleted the index and recreated it. The original index had some data in it from before I knew anything about mapping the data to specific field types. This seemed to fix the issue with only a few fields being indexed.
However, parts of my mapping appear to be ignored. Specifically the Agent string mapping:
visit_mapping = {
    'properties': {
        'uniqID': str_not_analyzed,
        'pages': str_not_analyzed,
        'domain': str_not_analyzed,
        'Srvr IP': str_not_analyzed,
        'Visitor IP': str_not_analyzed,
        'Agent': { 'type': 'string', 'index': 'not_analyzed' },
        'Referrer': { 'type': 'string' },
        'Entrance Time': time_date_mapping,
        'Request Time': time_date_mapping,
        'Raw': { 'type': 'string', 'index': 'not_analyzed' },
    },
}
Here's the output of http://localhost:9200/visits_test2/_mapping
{
    "visits_test2": {
        "mappings": {
            "visit": {
                "properties": {
                    "Agent": {"type": "string"},
                    "Entrance Time": {"type": "date", "format": "dateOptionalTime"},
                    "Raw": {
                        "properties": {
                            "Entrance Time": {"type": "double"},
                            "domain": {"type": "string"},
                            "uniqID": {"type": "string"}
                        }
                    },
                    "Referrer": {"type": "string"},
                    "Request Time": {"type": "date", "format": "dateOptionalTime"},
                    "Srvr IP": {"type": "string"},
                    "Visitor IP": {"type": "string"},
                    "domain": {"type": "string"},
                    "uniqID": {"type": "string"}
                }
            }
        }
    }
}
Note that I've used an entirely new index. The reason being that I wanted to make sure nothing was carrying over from one to the next.
Note that I'm using the Python library elasticsearch.py and following their examples for mapping syntax.
--------- Python Code for Entering Data into ES, per comment request -----------
Below is a file named mapping.py. I have not yet fully commented the code, since this was just code to test whether this method of data entry into ES was viable. If it is not self-explanatory, let me know and I'll add additional comments.
Note, I programmed in PHP for years before picking up Python. In order to get up and running faster with Python I created a couple of files with basic string and file manipulation functions and made them into a package. They are written in Python and meant to mimic the behavior of a built-in PHP function. So when you see a call to php_basic_* it is one of those functions.
# Standard Library Imports
import json, copy, datetime, time, enum, os, sys, numpy, math
from datetime import datetime
from enum import Enum, unique

from elasticsearch import Elasticsearch

# My Library
import basicconfig, mybasics
from mybasics.cBaseClass import BaseClass, BaseClassErrors
from mybasics.cHelpers import HandleErrors, LogLvl

# This imports several constants, a couple of functions, and a helper class
from basicconfig.startup_config import *

# Connect to ElasticSearch
es = Elasticsearch([{'host': 'localhost', 'port': '9200'}])

# Create mappings of a visit
time_date_mapping = { 'type': 'date_time' }
str_not_analyzed = { 'type': 'string' }  # This originally included 'index': 'not_analyzed' as well

visit_mapping = {
    'properties': {
        'uniqID': str_not_analyzed,
        'pages': str_not_analyzed,
        'domain': str_not_analyzed,
        'Srvr IP': str_not_analyzed,
        'Visitor IP': str_not_analyzed,
        'Agent': { 'type': 'string', 'index': 'not_analyzed' },
        'Referrer': { 'type': 'string' },
        'Entrance Time': time_date_mapping,
        'Request Time': time_date_mapping,
        'Raw': { 'type': 'string', 'index': 'not_analyzed' },
        'Pages': { 'type': 'string', 'index': 'not_analyzed' },
    },
}


class Visit_to_ElasticSearch(object):
    """
    """

    INDEX = 'visits'
    DOC_TYPE = 'visit'

    def __init__(self, fname, index=True):
        """
        """
        self._visit = json.loads(php_basic_files.file_get_contents(fname))
        self._pages = self._visit.pop('pages')

        self.uniqID = self._visit['uniqID']
        self.domain = self._visit['domain']
        self.entrance_time = self._convert_time(self._visit['Entrance Time'])

        # Get a list of the page IDs
        self.pages = self._pages.keys()

        # Extract IPs and such from a single page
        page = self._pages[self.pages[0]]
        srvr = page['SERVER']
        req = page['REQUEST']

        self.visitor_ip = srvr['REMOTE_ADDR']
        self.srvr_ip = srvr['SERVER_ADDR']
        self.request_time = self._convert_time(srvr['REQUEST_TIME'])
        self.agent = srvr['HTTP_USER_AGENT']

        # Now go grab data that might not be there...
        self._extract_optional()

        if index is True:
            self.index_with_elasticsearch()

    def _convert_time(self, ts):
        """
        """
        try:
            dt = datetime.fromtimestamp(ts)
        except TypeError:
            dt = datetime.fromtimestamp(float(ts))
        return dt.strftime('%Y-%m-%dT%H:%M:%S')

    def _extract_optional(self):
        """
        """
        self.referrer = ''

    def index_with_elasticsearch(self):
        """
        """
        visit = {
            'uniqID': self.uniqID,
            'pages': [],
            'domain': self.domain,
            'Srvr IP': self.srvr_ip,
            'Visitor IP': self.visitor_ip,
            'Agent': self.agent,
            'Referrer': self.referrer,
            'Entrance Time': self.entrance_time,
            'Request Time': self.request_time,
            'Raw': self._visit,
            'Pages': php_basic_str.implode(', ', self.pages),
        }
        es.index(
            index=Visit_to_ElasticSearch.INDEX,
            doc_type=Visit_to_ElasticSearch.DOC_TYPE,
            id=self.uniqID,
            timestamp=int(math.floor(self._visit['Entrance Time'])),
            body=visit
        )

es.indices.create(
    index=Visit_to_ElasticSearch.INDEX,
    body={
        'settings': {
            'number_of_shards': 5,
            'number_of_replicas': 1,
        }
    },
    # ignore already existing index
    ignore=400
)
In case it matters, this is the simple loop I use to dump the data into ES:
for f in all_files:
    try:
        visit = mapping.Visit_to_ElasticSearch(f)
    except IOError:
        pass
where all_files is a list of all the visit files (full path) I have in my test data set.
Here is a sample visit file from a Google Bot visit:
{u'Entrance Time': 1407551587.7385,
 u'domain': u'############',
 u'pages': {u'6818555600ccd9880bf7acef228c5d47': {
     u'REQUEST': [],
     u'SERVER': {
         u'DOCUMENT_ROOT': u'/var/www/####/',
         u'Entrance Time': 1407551587.7385,
         u'GATEWAY_INTERFACE': u'CGI/1.1',
         u'HTTP_ACCEPT': u'*/*',
         u'HTTP_ACCEPT_ENCODING': u'gzip,deflate',
         u'HTTP_CONNECTION': u'Keep-alive',
         u'HTTP_FROM': u'googlebot(at)googlebot.com',
         u'HTTP_HOST': u'############',
         u'HTTP_IF_MODIFIED_SINCE': u'Fri, 13 Jun 2014 20:26:33 GMT',
         u'HTTP_USER_AGENT': u'Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)',
         u'PATH': u'/usr/local/bin:/usr/bin:/bin',
         u'PHP_SELF': u'/index.php',
         u'QUERY_STRING': u'',
         u'REDIRECT_SCRIPT_URI': u'http://############/',
         u'REDIRECT_SCRIPT_URL': u'############',
         u'REDIRECT_STATUS': u'200',
         u'REDIRECT_URL': u'############',
         u'REMOTE_ADDR': u'############',
         u'REMOTE_PORT': u'46271',
         u'REQUEST_METHOD': u'GET',
         u'REQUEST_TIME': u'1407551587',
         u'REQUEST_URI': u'############',
         u'SCRIPT_FILENAME': u'/var/www/PIAN/index.php',
         u'SCRIPT_NAME': u'/index.php',
         u'SCRIPT_URI': u'http://############/',
         u'SCRIPT_URL': u'/############/',
         u'SERVER_ADDR': u'############',
         u'SERVER_ADMIN': u'admin#############',
         u'SERVER_NAME': u'############',
         u'SERVER_PORT': u'80',
         u'SERVER_PROTOCOL': u'HTTP/1.1',
         u'SERVER_SIGNATURE': u'<address>Apache/2.2.22 (Ubuntu) Server at ############ Port 80</address>\n',
         u'SERVER_SOFTWARE': u'Apache/2.2.22 (Ubuntu)',
         u'uniqID': u'bbc398716f4703cfabd761cc8d4101a1'},
     u'SESSION': {u'Entrance Time': 1407551587.7385,
                  u'uniqID': u'bbc398716f4703cfabd761cc8d4101a1'}}},
 u'uniqID': u'bbc398716f4703cfabd761cc8d4101a1'}
Now I understand better why the Raw field is an object instead of a simple string, since it is assigned self._visit, which in turn was initialized with json.loads(php_basic_files.file_get_contents(fname)).
Anyway, based on all the information you've given above, my take is that the mapping was never installed via put_mapping. From there on, there's no way anything else can work the way you like. I suggest you modify your code to install the mapping before you index your first visit document.
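For illustration, here is a minimal sketch of that order of operations with elasticsearch-py against a 1.x cluster. The field list is abbreviated from the visit_mapping above, and note that 'date_time' is not a valid Elasticsearch field type, so this sketch uses 'date' instead:
from elasticsearch import Elasticsearch

es = Elasticsearch([{'host': 'localhost', 'port': '9200'}])

INDEX = 'visits'
DOC_TYPE = 'visit'

# Abbreviated from the question's visit_mapping; 'date', not 'date_time'
visit_mapping = {
    'properties': {
        'uniqID': {'type': 'string', 'index': 'not_analyzed'},
        'domain': {'type': 'string', 'index': 'not_analyzed'},
        'Agent': {'type': 'string', 'index': 'not_analyzed'},
        'Entrance Time': {'type': 'date'},
        'Request Time': {'type': 'date'},
    },
}

# 1. Create the index first (ignore 400 if it already exists)
es.indices.create(index=INDEX, ignore=400)

# 2. Install the mapping BEFORE indexing the first visit document,
#    so Elasticsearch never has to guess the field types
es.indices.put_mapping(index=INDEX, doc_type=DOC_TYPE,
                       body={DOC_TYPE: visit_mapping})
With Agent mapped as not_analyzed before any documents arrive, it should also become available as an aggregation field for your Kibana pie chart.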