Web service for autosuggest on city names / postal codes, including lat/long coordinates? - web-services

I'm looking for a web service to be used for an autocomplete field,
where people can fill in a postal code, a city name, or both.
This service will need all cities in Europe, so we can use it for all our country websites,
and at a later stage we want to keep the world open for Asia and America, so coverage there would be a plus.
Preferably it would also return the lat/long coordinates for the locations.
Right now it is a free text field; after leaving the field, we hit the Google geocoding service
to find coordinates. Preferably I would tie these two together,
so we don't have to query 2 services for one thing.
Does anyone know of the existence of such a service online somewhere?
Or would you suggest building our own database with cities / postal codes / coordinates?
If so, we would need to get the content from somewhere too, and I was trying to avoid that issue :)

I recently searched for a similar service, in vain.
I wanted my users to have auto-complete when entering a city name, and once a city is chosen I needed to pass the name and lat/long on to the Google API. In the end I did this:
Downloaded the geonames allcountries.zip, full extract: this
Imported it into a SQL DB via SSIS (about 7.5 million records!)
Wrote a simple query to extract just the cities (only the PPLC, PPLA and PPLA2 records).
This left me with a manageable table of 9,112 records (with lat/long and country code) covering all the cities in the world. I then wrote my own code to query the data.
Not ideal, but I needed a solution.
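If you would rather do the filtering step in code instead of SSIS, here is a minimal sketch in Java. It assumes the unzipped allCountries.txt in the standard geonames tab-delimited layout (name in column 2, lat/long in columns 5-6, feature code in column 8); the file name and the semicolon output format are just illustrative:

import java.io.BufferedReader;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Set;

public class CityExtract {
    // PPLC = capital, PPLA/PPLA2 = seats of first/second-order admin divisions.
    private static final Set<String> CITY_CODES = Set.of("PPLC", "PPLA", "PPLA2");

    public static void main(String[] args) throws IOException {
        try (BufferedReader in = Files.newBufferedReader(
                Paths.get("allCountries.txt"), StandardCharsets.UTF_8)) {
            String line;
            while ((line = in.readLine()) != null) {
                String[] f = line.split("\t", -1);
                // geonames columns (0-based): 1 = name, 4 = lat, 5 = long,
                // 7 = feature code, 8 = country code
                if (CITY_CODES.contains(f[7])) {
                    System.out.printf("%s;%s;%s;%s%n", f[1], f[8], f[4], f[5]);
                }
            }
        }
    }
}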

I know this post is very old, but for those who are looking for a simple solution that can be integrated in 5 minutes, here is the link:
Geocomplete jQuery...
For my case I followed these steps:
1 - Download the plugin from here.
2 - Add the jquery.geocomplete.js or jquery.geocomplete.min.js file to the javascript folder of your project.
3 - Reference this file in script tags on the HTML page that has the input field you want to autocomplete with cities:
<script src='/PathToTheFile/jquery.geocomplete.js'></script>
4 - To convert an input into an autocomplete field, simply call the Geocomplete plugin in script tags:
<script>
$("#IdOfTheInputField").geocomplete(); // Option 1: Call on element.
$.fn.geocomplete("input"); // Option 2: Pass element as argument.
</script>
5 - You can check the complete list of options via the link provided at the top.
Hope that this helped!

Related

Kibana: can I store "Time" as a variable and run a consecutive search?

I want to automate a few searches in one; here are the steps:
Search in Kibana for this ID: "b2c729b5-6440-4829-8562-abd81991e2a0", which will return a bunch of logs. Of these logs I need to take the first and the last timestamp.
I would now like to store these two values, FROM: September 3rd 2019, 21:28:22.155 and TO: September 3rd 2019, 21:28:23.524, in 2 variables.
Run a second search in Kibana for the word "fail" between these two time variables.
How can I automate the whole process without the need for copy/paste and running a second query?
EDIT:
SHORT STORY LONG: I work for a company that produces software for autonomous vehicles.
SCENARIO: A booking is rejected and we need to understand why.
WHERE IS THE PROBLEM: I need to monitor just a few seconds of logs on 3 different machines. The logs are completely separate; there is no relation between them, so I cannot write a single query in Discover. I need to run 3 separate queries.
EXAMPLE:
A booking was rejected, so I open Chrome and search on "elk-prod.myhost.com" for the BookingID "b2c729b5-6440-4829-8562-abd81991e2a0", and I get a dozen logs returned within a range of 2 seconds (FROM: September 3rd 2019, 21:28:22.155, TO: September 3rd 2019, 21:28:23.524).
Now I need to know what was happening on the car, so I open a new Chrome tab and search on "elk-prod.myhost.com" for the CarID "Tesla-45-OU" on the time range FROM: September 3rd 2019, 21:28:22.155, TO: September 3rd 2019, 21:28:23.524.
Now I need to know why the server that calculates the matching rejected the booking, so I open a new Chrome tab and search for the word CalculationMatrix, again on the time range FROM: September 3rd 2019, 21:28:22.155, TO: September 3rd 2019, 21:28:23.524.
CONCLUSION: I want to stop opening Chrome tabs by hand and automate the whole thing. I have no idea around what time the booking was made, so I first need to search for the BookingID "b2c729b5-6440-4829-8562-abd81991e2a0", then store the timestamps of the first and last log, and run a second and third query based on those timestamps.
There is no relation between the 3 logs I search for, so there is no way to filter from Discover; I need to automate 3 different queries.
Here is how I would do it. First of all, from what I understand, you have three different indexes:
one for "bookings"
one for "cars"
one for "matchings"
First, in Discover, I would create three Saved Searches, one per index pattern. Then in Visualize, I would create a Vertical bar chart on the bookings saved search (Bucket X-Axis by date_histogram on the timestamp field, leave the rest as is). You'll get a nice histogram of all your booking events bucketed by time.
Finally, I would create a dashboard and add the vertical bar chart + those three saved searches inside it.
When done, the way I would search according to the process you've described above is as follows:
Search for the booking ID b2c729b5-6440-4829-8562-abd81991e2a0 in the top filter bar. In the bar chart histogram (bookings), you will see all documents related to the selected booking. On that chart, you can select the exact period from when the very first booking document happened to the very last. This will adapt the main time picker at the top, and the start/end time will be "remembered" by Kibana.
Remove the booking ID from the top filter (since we now know the time range and Kibana stores it). Search for Tesla-45-OU in the top filter bar. The bar histogram + the bookings saved search + the matchings saved search will be empty, but you'll have data inside the second list, the one for cars. Find whatever you need to find in there and go to the next step.
Remove the car ID from the top filter and search for ComputationMatrix. Now the third saved search is going to show you whatever documents you need to see within that time range.
I'm lacking realistic data to try this out, but I definitely think this is possible as I've laid out above, probably with some adaptations.
Kibana does work like this (any order is OK):
Select the time filter: https://www.elastic.co/guide/en/kibana/current/set-time-filter.html
Add additional criteria to the search, for example field s is b2c729b5-6440-4829-8562-abd81991e2a0.
Add additional criteria to the search, for example field x is Fail.
Additionally, you can view the surrounding documents: https://www.elastic.co/guide/en/kibana/current/document-context.html#document-context
This is how Kibana works.
You can prepare some filters beforehand, save them, and then use them if you want to somehow automate the discovery process.
You can do that in the Discover tab in Kibana using the New/Save/Open options.
Edit:
I do not think you can achieve what you need in Kibana. As I mentioned earlier, one option is to change the data that is coming into Elasticsearch so you can search for it via Discover in Kibana. Another option could be building, for example, a Java application that uses Elasticsearch - then you can write an algorithm that returns the data that you want. But I think that's a big overhead, and I recommend checking the data first.
Edit: To clarify - you can create an external Java application, let's say a Spring Boot application, that uses Elasticsearch - all the data that you need is inside it.
But with this option you will not use Kibana at all.
You can export the results to CSV or whatever you want in the code.
The Spring Boot application can ask Elasticsearch for whatever it needs, and then it is easy to store these time values in variables inside the Java code.
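A minimal sketch of that approach with the Elasticsearch high-level REST client: find the first and last timestamp for the booking ID, then search for "fail" in that range. The index pattern logs-*, the field names BookingID, message and @timestamp, and the port are assumptions you would adjust to your own mapping:

import org.apache.http.HttpHost;
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.aggregations.AggregationBuilders;
import org.elasticsearch.search.aggregations.metrics.Max;
import org.elasticsearch.search.aggregations.metrics.Min;
import org.elasticsearch.search.builder.SearchSourceBuilder;

public class BookingLogSearch {
    public static void main(String[] args) throws Exception {
        try (RestHighLevelClient client = new RestHighLevelClient(
                RestClient.builder(new HttpHost("elk-prod.myhost.com", 9200, "https")))) {

            // 1) First and last timestamp of the logs for the booking ID.
            SearchSourceBuilder range = new SearchSourceBuilder()
                    .query(QueryBuilders.matchQuery("BookingID",
                            "b2c729b5-6440-4829-8562-abd81991e2a0"))
                    .size(0)
                    .aggregation(AggregationBuilders.min("first").field("@timestamp"))
                    .aggregation(AggregationBuilders.max("last").field("@timestamp"));
            SearchResponse r = client.search(
                    new SearchRequest("logs-*").source(range), RequestOptions.DEFAULT);
            Min first = r.getAggregations().get("first");
            Max last = r.getAggregations().get("last");

            // 2) Reuse those two values as the time range for the follow-up search.
            SearchSourceBuilder fails = new SearchSourceBuilder()
                    .query(QueryBuilders.boolQuery()
                            .must(QueryBuilders.matchQuery("message", "fail"))
                            .filter(QueryBuilders.rangeQuery("@timestamp")
                                    .gte(first.getValueAsString())
                                    .lte(last.getValueAsString())));
            SearchResponse failHits = client.search(
                    new SearchRequest("logs-*").source(fails), RequestOptions.DEFAULT);
            failHits.getHits().forEach(hit -> System.out.println(hit.getSourceAsString()));
        }
    }
}

The second and third query (car ID, CalculationMatrix) would just repeat step 2 with a different match query.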
EDIT: After the OP edited the question to change it dramatically:
@FrancescoMantovani Well, the edited version is very different from what you first posted here: How to automate the whole process without need of copy/paste and running a second query? and searching for the word fail in a single shot. In the accepted answer you are still applying three filters, one at a time, so it is not one search, but three.
What's more, if you used one index and sent the data from multiple hosts via Filebeat, you wouldn't even have to create this dashboard to do that. You could select the exact period from when the very first document happened to the very last for a filter, then remove it and add another filter that you need - it's as simple as that. Before, you were writing about one query:
How to automate the whole process without need of copy/paste and
running a second query?
not three. And you don't need to open a new Chrome tab each time you want to change the filter; just organize the data, for example by using Filebeat as mentioned before.
There is no relation between the 3 logs
From what you wrote, the relation exists, and it is time.
If the data is in, for example, three different indices (because the documents don't have much data in common), you can do it like this:
You can switch between them easily in Discover:
Go to Discover, select index 1, search, and select the time range that you need; when you change the index, the time range is still the one you selected, so you only need to change the filter - you will get what you need.

Blue Prism Decision stage to ignore letter capitalization

I am new to Blue Prism and trying to develop a bot that will do some searches on a CRM portal. The bot should search for a certain customer in the web-based CRM app database, and should select the correct one, based on the information provided in an Excel file.
For example, my Excel file has the following information:
Customer name: BLABLA LTD
Contact: test.email@example.com
First Name: John
Last Name: Smith
The bot will use the information in the cells above to perform the search in the web portal, but the web portal sometimes contains information in capital letters. I have managed to make the bot go through each element on the webpage that contains a search result, but I want it to click on the one that matches the information above. I used a Decision stage, so if the customer name in the table is the same as in the element, the bot will click on it.
The problem is that in the table the text is capitalized, but in the web form it is not, so Blue Prism will consider the values different. Is there any way to make the bot ignore capitalization when performing the comparison logic? What I am doing now is adding a new calculation stage to store all the elements in lowercase and afterwards performing the equality check on the new lowercase variables, but I was hoping there is an easier way.
You can usually use Lower() or Upper() on both strings; that will make sure the two of them are in the same casing.
You will find the functions under Text in the function list if you want to see more details about them.
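For example, the Decision stage expression could compare the two values directly, without the extra calculation stages (the data item names here are just illustrative):
Lower([Customer Name From Excel]) = Lower([Customer Name From Portal])
The lowering happens inside the comparison itself, so no extra lowercase variables are needed.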

Query to set a value for all items in Amazon SimpleDB

I am trying to set a value for all items in a domain that do not already have a certain value and have an additional flag set.
Basically, for all my items:
SET ValueA to 100 if ValueB is 0
But I am confused about how to achieve this. So far I've been setting the value for individual items by using a PutRequest like this:
ArrayList<ReplaceableAttribute> newAttributes = new ArrayList<ReplaceableAttribute>();
newAttributes.add(new ReplaceableAttribute("ValueA",Integer.toString(100), true));
PutAttributesRequest newRequest = new PutAttributesRequest();
newRequest.setDomainName(usersDomain);
newRequest.setItemName(userID);
newRequest.setAttributes(newAttributes);
sdb.putAttributes(newRequest);
This works for an individual item and requires me to first get the item name (userID). Does this mean that I have to "list" all of my items and do this one by one?
I suppose that since I have around 19,000+ items I would also have to use the token to get the next set after the 2,000 limit, right?
Isn't there a more efficient way? This might not be so heavy right now, but I expect to eventually have over 100k items.
PS: I am using the AWS Java SDK for Eclipse.
If you are asking how you can do it programmatically by writing your own code, then yes: first you have to know all the item names (in your case the userIDs), and then you need to set the value one by one. You can use BatchPutAttributes in this case. Using batch PUT you can update 25 items in one request, and you can run 5 to 20 BatchPutAttributes requests in parallel threads to tune the performance.
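A minimal sketch of that approach with the AWS SDK for Java: select the matching item names (paging through with NextToken) and write them back in batches of 25. It assumes ValueB is stored as the string '0', following the question:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import com.amazonaws.services.simpledb.AmazonSimpleDB;
import com.amazonaws.services.simpledb.model.*;

public class BulkSetValueA {
    public static void updateAll(AmazonSimpleDB sdb, String usersDomain) {
        // Assumes ValueB is stored as the string '0'.
        String select = "select ValueB from `" + usersDomain + "` where ValueB = '0'";
        List<ReplaceableItem> batch = new ArrayList<>();
        String token = null;
        do {
            // Page through all matching items; SimpleDB returns a NextToken.
            SelectResult result = sdb.select(
                    new SelectRequest(select).withNextToken(token));
            for (Item item : result.getItems()) {
                batch.add(new ReplaceableItem(item.getName(),
                        Arrays.asList(new ReplaceableAttribute("ValueA", "100", true))));
                if (batch.size() == 25) { // BatchPutAttributes limit per request
                    sdb.batchPutAttributes(
                            new BatchPutAttributesRequest(usersDomain, batch));
                    batch = new ArrayList<>();
                }
            }
            token = result.getNextToken();
        } while (token != null);
        if (!batch.isEmpty()) {
            sdb.batchPutAttributes(new BatchPutAttributesRequest(usersDomain, batch));
        }
    }
}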
If you need to do it in a quick, somewhat tricky way, you can use SDBExplorer. Please remember it will set 100 for all items, because SDBExplorer does not support conditional PUTs. If you would like to set it anyway, then follow these steps:
Download the SDBExplorer zip version from the download page.
Extract it and run the executable.
Download the 30-day trial license.
Once the license has been downloaded, the main UI will open.
Provide valid Access and Secret keys and click on the "GO" button.
You will see the list of domains in the tree on the left side.
Right-click on the domain in which you would like to set the value for all items.
Choose the "Export to CSV" option.
Export the content of the domain into CSV. http://www.sdbexplorer.com/documentation/simpledb--how-to-export-domain-in-csv-using-sdbexplorer.html
Go to the path where your domain was exported.
Open the CSV file.
Your first column is the item name.
Delete all columns other than the item name and the "ValueA" column.
Set 100 for every item name under the "ValueA" column.
Save the CSV.
Go back to the SDBExplorer main UI.
Select the same domain.
Click on the "Import" option in the toolbar.
A panel will open.
Now import the data into the domain. http://www.sdbexplorer.com/documentation/simpledb--how-to-upload-csv-file-data-and-specifying-column-as-amazon-simple-db-item-name.html
Once the import is done, explore the domain and you will find the value 100 set for column ValueA on all items.
Please try the steps on a dummy domain first.
What exactly am I trying to suggest?
To get all the item names in your domain, I am suggesting that you export the entire content of your domain into a CSV file on the local file system. Once you have all the item names in the CSV, keep only the item name column and the "ValueA" column, set "100" for all the items in the CSV file, and upload/import the content back into the domain.
Disclosure: I am one of the developers of SDBExplorer.

How to create a report, as a Word document, from data in a Sharepoint list

I have a SharePoint list that is used to record weekly activity, e.g. four columns for week number, project name, customer name, and comment.
I'd like to be able to generate a report containing all the data for a particular week in the following format:
ProjectName 1
Customer 1
Comment 1
Customer 2
Comment 1
ProjectName 2
Customer 3
Comment 1
Customer 4
Comment 1
I can do this by exporting the list to an Excel file and then writing some VBA to generate a Word document, but I'm wondering if there is any way to cut out the Excel step.
Open XML?
I found it quite a steep learning curve to get into, but very powerful. I'd suggest that this is a more elegant approach than VBA (in that you are dealing with strongly-typed classes), but not necessarily quicker.
So there are two parts to this:
a) Getting the data from SharePoint.
b) Converting it into a Word document.
For a), you will probably end up running this remotely (i.e. not on the SharePoint server), as automating Office apps on a server is not recommended - so you should look into the SharePoint Web Services to access your data.
For b), you can use:
- Office Automation (via VBA or C#, VB.NET, etc.)
- Open XML, as Pete suggests (example running in SharePoint)
- A commercial component such as Aspose
The last two will allow you to run your code on the SharePoint server.
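As one more option for part b), here is a minimal sketch using Apache POI's XWPF API (a Java library not mentioned in the answers above; the grouping of rows by project and the String[] {customer, comment} shape are illustrative):

import java.io.FileOutputStream;
import java.io.IOException;
import java.util.List;
import java.util.Map;
import org.apache.poi.xwpf.usermodel.XWPFDocument;
import org.apache.poi.xwpf.usermodel.XWPFRun;

public class WeeklyReport {
    // rows: project name -> list of {customer, comment} pairs for one week.
    public static void write(Map<String, List<String[]>> rows, String path)
            throws IOException {
        try (XWPFDocument doc = new XWPFDocument();
             FileOutputStream out = new FileOutputStream(path)) {
            for (Map.Entry<String, List<String[]>> project : rows.entrySet()) {
                // Project name as a bold heading line.
                XWPFRun heading = doc.createParagraph().createRun();
                heading.setBold(true);
                heading.setText(project.getKey());
                for (String[] entry : project.getValue()) {
                    doc.createParagraph().createRun().setText(entry[0]); // customer
                    doc.createParagraph().createRun().setText(entry[1]); // comment
                }
            }
            doc.write(out);
        }
    }
}

Filling the map is the part a) problem; the SharePoint Lists web service (GetListItems) is one way to pull the rows remotely.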
There is a nice example on GitHub (FoodOrder) of how to do this with Templater.
As the author, I highly recommend it ;)

Instant Video results

I am querying Amazon's Product Advertising API for Instant Video (streaming) results. Everything is working fine -- except that some information is missing:
Descriptions are not included in the results. For example, on Amazon's website the movie "Food, Inc" (http://www.amazon.com/Food-Inc/dp/B002VRZEYM) has the description "An unflattering look inside America's corporate controlled food industry." When queried via the API, however, no description is returned at all.
Titles of TV shows are not included in the results. For example, if you search for the 2nd episode of season 1 of Arrested Development (called "Top Banana") on Amazon's website (http://www.amazon.com/gp/product/B000N2VRJ8), you will get the full name of the TV show, the season #, the episode #, and the episode name. When queried via the API, however, only the episode name is returned.
Does anyone know of a solution to these problems? FYI, the nodeId I am using for my search is 2858778011.
In order to get more details, you'll need to set the ResponseGroup parameter in your request. See the ResponseGroup section of the ItemLookup documentation for the different response groups that you can use.
For example, setting the ResponseGroup parameter to Large, Medium, Small, or even ItemAttributes will give you the description:
An unflattering look inside America's corporate controlled food industry.
for Food, Inc (B002VRZEYM) and the Title:
Top Banana
for Arrested Development season 1 episode 2 (B000N2VRJ8).
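For illustration, such an ItemLookup request would look roughly like this (the signing parameters AWSAccessKeyId, AssociateTag, Timestamp and Signature are omitted here but required in practice):
http://webservices.amazon.com/onca/xml?Service=AWSECommerceService&Operation=ItemLookup&ItemId=B002VRZEYM&ResponseGroup=ItemAttributes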
I had the same problem while trying to query the Amazon API for Prime Instant Video content. Although this question is kind of old, there are probably some people like me who are interested in a detailed answer, especially for the second part (2.).
Like Jonathan Spooner already said, you have to set a response group
that returns the data you're interested in. Official documentation: Response Groups - Product Advertising API.
In your case, I think the response group Small should do.
If you want to get the title of the TV show that contains a certain episode, you have to set the response group RelatedItems in your request, too (you can set multiple response groups in one request). You will also have to name a RelationshipType, otherwise the request will fail. For Episode - Season relationships you choose Episode.
With RelatedItems, the result will contain a node named <RelatedItems>. You will find the season item in there, whose title should be something like "Arrested Development - Season 1 [HD]".
Note: If you really just want the TV show title, you could either parse the season name for it, or make another ItemLookup with the season's ASIN: set the response group RelatedItems again, but this time with RelationshipType=Season. This will return Season - TV Series relationships. The related item will contain the TV show in general. (But the title could have a suffix like [HD] anyway.)
Here you have a list with all relationship types: Relationship Types - Product Advertising API
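Again for illustration, the episode lookup described above could be extended roughly like this (signing parameters omitted as before):
http://webservices.amazon.com/onca/xml?Service=AWSECommerceService&Operation=ItemLookup&ItemId=B000N2VRJ8&ResponseGroup=Small,RelatedItems&RelationshipType=Episode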