Nominatim reverse geocoding not working for highway entrance/exit (motorway_link)

Hi, I have Nominatim installed on an Ubuntu VM, which I use for reverse geo-lookups. All is going well, but Nominatim doesn't return the correct object when I do a reverse lookup for a highway entrance (motorway_link).
I found a GitHub issues thread from 2 years ago providing solutions: one was changing the address rank for motorway_links from 27 to 26, and the other was altering the style import file. Neither seems to work for me. Has anyone had any experience with this? I'm using the latest version of Nominatim.
This is the highway section of my Nominatim/settings/import-full.style file:
{
    "keys" : ["highway"],
    "values" : {
        "no" : "skip",
        "turning_circle" : "skip",
        "mini_roundabout" : "skip",
        "noexit" : "skip",
        "crossing" : "skip",
        "give_way" : "skip",
        "stop" : "skip",
        "street_lamp" : "main,with_name",
        "traffic_signals" : "main,with_name",
        "service" : "main,with_name",
        "cycleway" : "main,with_name",
        "path" : "main,with_name",
        "footway" : "main,with_name",
        "steps" : "main,with_name",
        "bridleway" : "main,with_name",
        "track" : "main,with_name",
        "byway" : "main,with_name",
        "motorway_link" : "main",
        "trunk_link" : "main",
        "primary_link" : "main",
        "secondary_link" : "main",
        "tertiary_link" : "main",
        "" : "main"
    }
}
Content of the highway section of nominatim/settings/import-address.style:
{
    "keys" : ["highway"],
    "values" : {
        "motorway" : "main",
        "trunk" : "main",
        "primary" : "main",
        "secondary" : "main",
        "tertiary" : "main",
        "unclassified" : "main",
        "residential" : "main",
        "living_street" : "main",
        "pedestrian" : "main",
        "road" : "main",
        "service" : "main,with_name",
        "cycleway" : "main,with_name",
        "path" : "main,with_name",
        "footway" : "main,with_name",
        "steps" : "main,with_name",
        "bridleway" : "main,with_name",
        "track" : "main,with_name",
        "byway" : "main,with_name",
        "motorway_link" : "main",
        "trunk_link" : "main,with_name",
        "primary_link" : "main,with_name",
        "secondary_link" : "main,with_name",
        "tertiary_link" : "main,with_name"
    }
}
The highway section of nominatim/settings/address-levels.json:
"highway" : {
    "" : 30,
    "service" : 27,
    "cycleway" : 27,
    "path" : 27,
    "footway" : 27,
    "steps" : 27,
    "bridleway" : 27,
    "motorway_link" : 26,
    "primary_link" : 27,
    "trunk_link" : 27,
    "secondary_link" : 27,
    "tertiary_link" : 27,
    "residential" : 26,
    "track" : 26,
    "unclassified" : 26,
    "tertiary" : 26,
    "secondary" : 26,
    "primary" : 26,
    "living_street" : 26,
    "trunk" : 26,
    "motorway" : 26,
    "pedestrian" : 26,
    "road" : 26,
    "construction" : 26
},

So, I have solved it. If you look in the nominatim/settings/env.defaults file, you'll see the following lines:
# Configuration file for OSM data import.
# This may either be the name of one of an internal style or point
# to a file with a custom style.
# Internal styles are: admin, street, address, full, extratags
NOMINATIM_IMPORT_STYLE=extratags
This means that you'll need to alter the extratags-import.style file to allow unnamed motorway_links as well.
So change the highway section in nominatim/settings/extratags-import.style from this:
{
    "keys" : ["highway"],
    "values" : {
        "no" : "skip",
        "turning_circle" : "skip",
        "mini_roundabout" : "skip",
        "noexit" : "skip",
        "crossing" : "skip",
        "give_way" : "skip",
        "stop" : "skip",
        "street_lamp" : "main,with_name",
        "traffic_signals" : "main,with_name",
        "service" : "main,with_name",
        "cycleway" : "main,with_name",
        "path" : "main,with_name",
        "footway" : "main,with_name",
        "steps" : "main,with_name",
        "bridleway" : "main,with_name",
        "track" : "main,with_name",
        "byway" : "main,with_name",
        "motorway_link" : "main,with_name",
        "trunk_link" : "main,with_name",
        "primary_link" : "main,with_name",
        "secondary_link" : "main,with_name",
        "tertiary_link" : "main,with_name",
        "" : "main"
    }
}
to this:
{
    "keys" : ["highway"],
    "values" : {
        "no" : "skip",
        "turning_circle" : "skip",
        "mini_roundabout" : "skip",
        "noexit" : "skip",
        "crossing" : "skip",
        "give_way" : "skip",
        "stop" : "skip",
        "street_lamp" : "main,with_name",
        "traffic_signals" : "main,with_name",
        "service" : "main,with_name",
        "cycleway" : "main,with_name",
        "path" : "main,with_name",
        "footway" : "main,with_name",
        "steps" : "main,with_name",
        "bridleway" : "main,with_name",
        "track" : "main,with_name",
        "byway" : "main,with_name",
        "motorway_link" : "main",
        "trunk_link" : "main",
        "primary_link" : "main",
        "secondary_link" : "main",
        "tertiary_link" : "main",
        "" : "main"
    }
}
Altering these lines tells Nominatim not to skip unnamed motorway_links during import.
So if you find yourself having the same issue, do the following:
Go to the highway section in nominatim/settings/address-levels.json and change the address rank of 'motorway_link' from 27 to 26
In the highway section of the import style files (import-full.style, import-address.style, extratags-import.style) change "motorway_link" : "main,with_name" to "motorway_link" : "main"
Reimport the data
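A minimal sketch of those steps on the command line, assuming a Nominatim 4.x project directory and an OSM extract at data.osm.pbf (both paths are placeholders for your setup):

cd /srv/nominatim-project
# an address-rank change alone can be applied in place:
nominatim refresh --address-levels
# style file changes only take effect with a full re-import:
nominatim import --osm-file data.osm.pbf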

Related

Unable to execute Dataflow Pipeline : "Failed to create job with prefix beam_bq_job_LOAD_textiotobigquerydataflow"

I'm trying to run a Dataflow batch job using the template "Text file on Cloud Storage to BigQuery". The first three steps work, but the last stage fails with this error:
Error message from worker: java.lang.RuntimeException: Failed to create job with prefix beam_bq_job_LOAD_textiotobigquerydataflow0releaser1025091627592969dd_1a449a94623645758e91dcba53a86498_fc44bdad405c2c80860231502c18eb1e_00001_00000, reached max retries: 3, last failed job: { "configuration" : { "jobType" : "LOAD", "labels" : { "beam_job_id" : "2022-11-10_02_06_07-15255037958352274885" }, "load" : { "createDisposition" : "CREATE_IF_NEEDED", "destinationTable" : { "datasetId" : "minerals_test_dataset", "projectId" : "jio-big-data-poc", "tableId" : "mytable01" }, "ignoreUnknownValues" : false, "sourceFormat" : "NEWLINE_DELIMITED_JSON", "useAvroLogicalTypes" : false, "writeDisposition" : "WRITE_APPEND" } }, "etag" : "LHqft9L/H4XBWTNZ7BSRXA==", "id" : "jio-big-data-poc:asia-south1.beam_bq_job_LOAD_textiotobigquerydataflow0releaser1025091627592969dd_1a449a94623645758e91dcba53a86498_fc44bdad405c2c80860231502c18eb1e_00001_00000-2", "jobReference" : { "jobId" : "beam_bq_job_LOAD_textiotobigquerydataflow0releaser1025091627592969dd_1a449a94623645758e91dcba53a86498_fc44bdad405c2c80860231502c18eb1e_00001_00000-2", "location" : "asia-south1", "projectId" : "jio-big-data-poc" }, "kind" : "bigquery#job", "selfLink" : "https://bigquery.googleapis.com/bigquery/v2/projects/jio-big-data-poc/jobs/beam_bq_job_LOAD_textiotobigquerydataflow0releaser1025091627592969dd_1a449a94623645758e91dcba53a86498_fc44bdad405c2c80860231502c18eb1e_00001_00000-2?location=asia-south1", "statistics" : { "creationTime" : "1668074949767", "endTime" : "1668074949869", "startTime" : "1668074949869" }, "status" : { "errorResult" : { "message" : "Provided Schema does not match Table jio-big-data-poc:minerals_test_dataset.mytable01. Cannot add fields (field: marks)", "reason" : "invalid" }, "errors" : [ { "message" : "Provided Schema does not match Table jio-big-data-poc:minerals_test_dataset.mytable01. Cannot add fields (field: marks)", "reason" : "invalid" } ], "state" : "DONE" }, "user_email" : "49449455496-compute#developer.gserviceaccount.com", "principal_subject" : "serviceAccount:49449455496-compute#developer.gserviceaccount.com" }. org.apache.beam.sdk.io.gcp.bigquery.BigQueryHelpers$PendingJob.runJob(BigQueryHelpers.java:200) org.apache.beam.sdk.io.gcp.bigquery.BigQueryHelpers$PendingJobManager.waitForDone(BigQueryHelpers.java:153) org.apache.beam.sdk.io.gcp.bigquery.WriteTables$WriteTablesDoFn.finishBundle(WriteTables.java:378)
I tried running the same job with other datasets' CSV files, and the JavaScript UDF and JSON schema follow the documentation, but the job fails at the same stage. What could be a possible solution to this error?
The JSON schema you have given doesn't match the BigQuery schema of your table:
"Provided Schema does not match Table jio-big-data-poc:minerals_test_dataset.mytable01. Cannot add fields (field: marks)", "reason" : "invalid" }, "errors" : [ { "message" : "Provided Schema does not match Table jio-big-data-poc:minerals_test_dataset.mytable01. Cannot add fields (field: marks)", "reason" : "invalid" } ]
There is a field called marks that doesn't seem to exist in the BigQuery table.
If you update your BigQuery schema so that it matches the fields of your input JSON lines exactly, that will solve the issue.
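For example, a sketch of that fix with the bq CLI, using the dataset and table from the error message (the type of marks is an assumption here): dump the current schema, append the missing field, and update the table. Note that BigQuery only allows additive schema changes such as new NULLABLE columns.

# dump the current schema to a file
bq show --schema --format=prettyjson jio-big-data-poc:minerals_test_dataset.mytable01 > schema.json
# edit schema.json and append the missing field, e.g.:
#   {"name": "marks", "type": "INTEGER", "mode": "NULLABLE"}
bq update jio-big-data-poc:minerals_test_dataset.mytable01 schema.json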

C++ writing to mongo, string fields not working in aggregation pipeline

Quick summary: a C++ app loads data from SQL Server using OTL4 and writes to Mongo using mongocxx bulk_write; the strings seem to be getting mangled somehow, so they don't work in the aggregation pipeline (but appear fine otherwise).
I have a simple Mongo collection which doesn't seem to behave as expected with an aggregation pipeline when I'm projecting multiple fields. It's a trivial document, no nesting, fields are just doubles and strings.
The first 2 queries work as expected:
> db.TemporaryData.aggregate( [ { $project : { ParametersId:1 } } ] )
{ "_id" : ObjectId("5c28f751a531251fd0007c72"), "ParametersId" : 526988617 }
{ "_id" : ObjectId("5c28f751a531251fd0007c73"), "ParametersId" : 526988617 }
{ "_id" : ObjectId("5c28f751a531251fd0007c74"), "ParametersId" : 526988617 }
{ "_id" : ObjectId("5c28f751a531251fd0007c75"), "ParametersId" : 526988617 }
{ "_id" : ObjectId("5c28f751a531251fd0007c76"), "ParametersId" : 526988617 }
> db.TemporaryData.aggregate( [ { $project : { Col1:1 } } ] )
{ "_id" : ObjectId("5c28f751a531251fd0007c72"), "Col1" : 575 }
{ "_id" : ObjectId("5c28f751a531251fd0007c73"), "Col1" : 579 }
{ "_id" : ObjectId("5c28f751a531251fd0007c74"), "Col1" : 616 }
{ "_id" : ObjectId("5c28f751a531251fd0007c75"), "Col1" : 617 }
{ "_id" : ObjectId("5c28f751a531251fd0007c76"), "Col1" : 622 }
But combining them doesn't return both fields as expected:
> db.TemporaryData.aggregate( [ { $project : { ParametersId:1, Col1:1 } } ] )
{ "_id" : ObjectId("5c28f751a531251fd0007c72"), "ParametersId" : 526988617 }
{ "_id" : ObjectId("5c28f751a531251fd0007c73"), "ParametersId" : 526988617 }
{ "_id" : ObjectId("5c28f751a531251fd0007c74"), "ParametersId" : 526988617 }
{ "_id" : ObjectId("5c28f751a531251fd0007c75"), "ParametersId" : 526988617 }
{ "_id" : ObjectId("5c28f751a531251fd0007c76"), "ParametersId" : 526988617 }
It seems to be specific to the ParametersId field; for instance, if I choose 2 other fields it's OK:
> db.TemporaryData.aggregate( [ { $project : { Col1:1, Col2:1 } } ] )
{ "_id" : ObjectId("5c28f751a531251fd0007c72"), "Col1" : 575, "Col2" : "1101-2" }
{ "_id" : ObjectId("5c28f751a531251fd0007c73"), "Col1" : 579, "Col2" : "1103-2" }
{ "_id" : ObjectId("5c28f751a531251fd0007c74"), "Col1" : 616, "Col2" : "1300-3" }
{ "_id" : ObjectId("5c28f751a531251fd0007c75"), "Col1" : 617, "Col2" : "1300-3" }
{ "_id" : ObjectId("5c28f751a531251fd0007c76"), "Col1" : 622, "Col2" : "1400-3" }
For some reason, when I include the ParametersId field, all hell breaks loose in the pipeline:
> db.TemporaryData.aggregate( [ { $project : { ParametersId:1, Col2:1, Col1:1, Col3:1 } } ] )
{ "_id" : ObjectId("5c28f751a531251fd0007c72"), "ParametersId" : 526988617, "Col1" : 575 }
{ "_id" : ObjectId("5c28f751a531251fd0007c73"), "ParametersId" : 526988617, "Col1" : 579 }
{ "_id" : ObjectId("5c28f751a531251fd0007c74"), "ParametersId" : 526988617, "Col1" : 616 }
{ "_id" : ObjectId("5c28f751a531251fd0007c75"), "ParametersId" : 526988617, "Col1" : 617 }
{ "_id" : ObjectId("5c28f751a531251fd0007c76"), "ParametersId" : 526988617, "Col1" : 622 }
DB version and the data:
> db.version()
4.0.2
> db.TemporaryData.find()
{ "_id" : ObjectId("5c28f751a531251fd0007c72"), "CellId" : 998909269, "ParametersId" : 526988617, "Order" : 1, "Col1" : 575, "Col2" : "1101-2", "Col3" : "CHF" }
{ "_id" : ObjectId("5c28f751a531251fd0007c73"), "CellId" : 998909269, "ParametersId" : 526988617, "Order" : 1, "Col1" : 579, "Col2" : "1103-2", "Col3" : "CHF" }
{ "_id" : ObjectId("5c28f751a531251fd0007c74"), "CellId" : 998909269, "ParametersId" : 526988617, "Order" : 1, "Col1" : 616, "Col2" : "1300-3", "Col3" : "CHF" }
{ "_id" : ObjectId("5c28f751a531251fd0007c75"), "CellId" : 998909269, "ParametersId" : 526988617, "Order" : 36, "Col1" : 617, "Col2" : "1300-3", "Col3" : "CHF" }
{ "_id" : ObjectId("5c28f751a531251fd0007c76"), "CellId" : 998909269, "ParametersId" : 526988617, "Order" : 1, "Col1" : 622, "Col2" : "1400-3", "Col3" : "CHF" }
Update: quoting the field names makes no difference. I'm typing all of the above in the mongo.exe command line, but I see the same behavior in my C++ application with a slightly more complex pipeline (projecting all fields to guarantee order).
This same app is actually creating the data in the first place. Does anyone know what could be going wrong? It's all done using the mongocxx lib.
Update: it turns out there's something going wrong with my handling of strings. Without the string fields in the data, it's all fine. So I've knackered my strings somehow; even though they look and behave correctly in other ways, they don't play nice with the aggregation pipeline. I'm using mongocxx::collection.bulk_write to write standard std::strings, which are loaded from SQL Server through the OTL4 header. In between there's a strncpy_s when they get stored internally. I can't seem to create a simple reproducible example.
Just to be safe that there is no conflict with anything else, try using the projection with strictly formatted JSON (add quotes to the keys):
db.TemporaryData.aggregate( [ { $project : { "ParametersId":1, "Col1":1 } } ] )
I finally found that the issue was corrupt documents. Because I was using bulk_write for the insert, they were getting into the database anyway and causing this strange behavior. I switched to using insert_many, which reported that the documents were corrupt, and then I could track down the bug.
The docs were corrupt because I was writing the same field-value data multiple times, which seems to break the bsoncxx::builder::stream::document I was using to construct them.
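For anyone debugging something similar, here is a minimal sketch of the insert_many approach that surfaced the error (the collection and field names are taken from the question; the connection URI and build setup are assumptions):

#include <iostream>
#include <vector>

#include <bsoncxx/builder/stream/document.hpp>
#include <mongocxx/client.hpp>
#include <mongocxx/exception/bulk_write_exception.hpp>
#include <mongocxx/instance.hpp>
#include <mongocxx/uri.hpp>

int main() {
    mongocxx::instance instance{};  // must be created once per process
    mongocxx::client client{mongocxx::uri{"mongodb://localhost:27017"}};
    auto collection = client["test"]["TemporaryData"];

    using bsoncxx::builder::stream::document;
    using bsoncxx::builder::stream::finalize;

    // build the documents with the stream builder, one value per field
    std::vector<bsoncxx::document::value> docs;
    docs.push_back(document{} << "ParametersId" << 526988617
                              << "Col1" << 575
                              << "Col2" << "1101-2" << finalize);

    try {
        // throws mongocxx::bulk_write_exception if the server rejects a document
        collection.insert_many(docs);
    } catch (const mongocxx::bulk_write_exception& e) {
        std::cerr << "insert failed: " << e.what() << std::endl;
    }
    return 0;
}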

AppSync to DynamoDB update query mapping error

I have the following DynamoDB mapping template, to update an existing DynamoDB item:
{
    "version" : "2017-02-28",
    "operation" : "UpdateItem",
    "key" : {
        "id": $util.dynamodb.toDynamoDBJson($ctx.args.application.id),
        "tenant": $util.dynamodb.toDynamoDBJson($ctx.identity.claims['http://domain/tenant'])
    },
    "update" : {
        "expression" : "SET #sourceUrl = :sourceUrl, #sourceCredential = :sourceCredential, #instanceSize = :instanceSize, #users = :users",
        "expressionNames" : {
            "#sourceUrl" : "sourceUrl",
            "#sourceCredential" : "sourceCredential",
            "#instanceSize" : "instanceSize",
            "#users" : "users"
        },
        "expressionValues" : {
            ":sourceUrl" : $util.dynamodb.toDynamoDbJson($ctx.args.application.sourceUrl),
            ":sourceCredential" : $util.dynamodb.toDynamoDbJson($ctx.args.application.sourceCredential),
            ":instanceSize" : $util.dynamodb.toDynamoDbJson($ctx.args.application.instanceSize),
            ":users" : $util.dynamodb.toDynamoDbJson($ctx.args.application.users)
        }
    },
    "condition" : {
        "expression" : "attribute_exists(#id) AND attribute_exists(#tenant)",
        "expressionNames" : {
            "#id" : "id",
            "#tenant" : "tenant"
        }
    }
}
But I'm getting the following error:
message: "Unable to parse the JSON document: 'Unrecognized token '$util': was expecting ('true', 'false' or 'null')↵ at [Source: (String)"{↵ "version" : "2017-02-28",↵ "operation" : "UpdateItem",↵ "key" : {↵ "id": {"S":"abc-123"},↵ "tenant": {"S":"test"}↵ },↵ "update" : {↵ "expression" : "SET #sourceUrl = :sourceUrl, #sourceCredential = :sourceCredential, #instanceSize = :instanceSize, #users = :users",↵ "expressionNames" : {↵ "#sourceUrl" : "sourceUrl",↵ "#sourceCredential" : "sourceCredential",↵ "#instanceSize" : "instanceSize",↵ "#users" : "users"↵ }"[truncated 400 chars]; line: 17, column: 29]'"
I've tried removing parts, and it seems to be related to the expressionValues, but I can't see anything wrong with the syntax.
It seems you misspelled the toDynamoDBJson method.
Replace
$util.dynamodb.toDynamoDbJson($ctx.args.application.sourceUrl)
with
$util.dynamodb.toDynamoDBJson($ctx.args.application.sourceUrl)
Note the uppercase B in toDynamoDBJson.
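Applied to all four values, the corrected expressionValues section of the template above reads:

"expressionValues" : {
    ":sourceUrl" : $util.dynamodb.toDynamoDBJson($ctx.args.application.sourceUrl),
    ":sourceCredential" : $util.dynamodb.toDynamoDBJson($ctx.args.application.sourceCredential),
    ":instanceSize" : $util.dynamodb.toDynamoDBJson($ctx.args.application.instanceSize),
    ":users" : $util.dynamodb.toDynamoDBJson($ctx.args.application.users)
}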

addApi creates it but returns error

I am adding a new API to WSO2 as described in the WSO2 Publisher APIs documentation.
The query is the following:
http://testapiaddress:9763/publisher/site/blocks/item-add/ajax/add.jag?action=addAPI&name=PhoneVerification&context=/phoneverify&version=1.0&visibility=public&thumbUrl=&description=Verify a phone number&tags=phone,mobile,multimedia&endpointType=nonsecured&tiersCollection=Gold,Bronze&http_checked=http&https_checked=https&uriTemplate-0=/*&default_version_checked=default_version&bizOwner=xx&bizOwnerMail=xx@ee.com&techOwner=xx&techOwnerMail=ggg@ww.com&endpoint_config={"production_endpoints":{"url":"http://myaccountapi.dev.payoneer.com","config":null},"endpoint_type":"address"}&swagger={"paths" : {"/CheckPhoneNumber?PhoneNumber={number}" : {"get" : {"parameters" : [{"description" : "phone number", "name" : "number", "allowMultiple" : false, "type" : "string", "required" : true, "in" : "path"}], "responses" : {"200" : {}}, "x-auth-type" : "Application%20%26%20Application%20User", "x-throttling-tier" : "Unlimited"}}, "/test" : {}, "/" : {}}, "swagger" : "2.0", "info" : {"title" : "WeatherAPI", "version" : "1.0.0"}}
I can see that the API was created in the Publisher portal, but I get this as a response:
{"error" : false}
The response is very generic, but maybe someone has an idea why I get this error.
This is not an error. It says that there is no error.
If there is an error, the response will look like this:
{"error" : true, "description":"error description"}

Using multiple location fields in ElasticSearch + Django-Haystack

I'm using django-haystack and ElasticSearch to index Stores.
Until now, each store had one lat,long coordinate pair; we had to change this to represent the fact that one store can deliver products to very different (disjunct) regions, so I've added up to ten locations (lat,long pairs) to them.
When using one location field everything was working fine and I got the right results. Now, with multiple location fields, I can't get any results, not even the previous ones, for the same user and store coordinates.
My index is as follows:
class StoreIndex(indexes.SearchIndex, indexes.Indexable):
    text = indexes.CharField(document=True, use_template=True,
                             template_name='search/indexes/store/store_text.txt')
    location0 = indexes.LocationField()
    location1 = indexes.LocationField()
    location2 = indexes.LocationField()
    location3 = indexes.LocationField()
    location4 = indexes.LocationField()
    location5 = indexes.LocationField()
    location6 = indexes.LocationField()
    location7 = indexes.LocationField()
    location8 = indexes.LocationField()
    location9 = indexes.LocationField()

    def get_model(self):
        return Store

    def prepare_location0(self, obj):
        # If you're just storing the floats...
        return "%s,%s" % (obj.latitude, obj.longitude)

    # ..... up to prepare_location9
    def prepare_location9(self, obj):
        # If you're just storing the floats...
        return "%s,%s" % (obj.latitude_9, obj.longitude_9)
Is this the correct way to build my index?
From Elasticsearch I get this mapping information:
curl -XGET http://localhost:9200/stores/_mapping?pretty=True
{
    "stores" : {
        "modelresult" : {
            "properties" : {
                "django_id" : {
                    "type" : "string"
                },
                "location0" : {
                    "type" : "geo_point",
                    "store" : "yes"
                },
                "location1" : {
                    "type" : "geo_point",
                    "store" : "yes"
                },
                "location2" : {
                    "type" : "geo_point",
                    "store" : "yes"
                },
                "location3" : {
                    "type" : "geo_point",
                    "store" : "yes"
                },
                "location4" : {
                    "type" : "geo_point",
                    "store" : "yes"
                },
                "location5" : {
                    "type" : "geo_point",
                    "store" : "yes"
                },
                "location6" : {
                    "type" : "geo_point",
                    "store" : "yes"
                },
                "location7" : {
                    "type" : "geo_point",
                    "store" : "yes"
                },
                "location8" : {
                    "type" : "geo_point",
                    "store" : "yes"
                },
                "location9" : {
                    "type" : "geo_point",
                    "store" : "yes"
                },
                "text" : {
                    "type" : "string",
                    "analyzer" : "snowball",
                    "store" : "yes",
                    "term_vector" : "with_positions_offsets"
                }
            }
        }
    }
}
Then, I try to query this way:
sqs0 = SearchQuerySet().dwithin('location0', usuario, max_dist).distance('location0',usuario).using('stores')
where:
usuario is a Point instance representing the user trying to find stores near his position and
max_dist is a D instance.
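For reference, a minimal sketch of how those two values are typically constructed with GeoDjango's Point and D (the coordinates are placeholders matching the curl examples below):

from django.contrib.gis.geos import Point
from django.contrib.gis.measure import D

usuario = Point(-46.6, -23.5)  # Point takes (longitude, latitude)
max_dist = D(km=6)             # the same 6 km radius as in the curl queries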
If I query directly using curl, I get no results either.
Here is the result of querying using curl with multiple location fields:
$ curl -XGET http://localhost:9200/stores/modelresult/_search?pretty=true -d '{ "query" : { "match_all": {} }, "filter" : {"geo_distance" : { "distance" : "6km", "location0" : { "lat" : -23.5, "lon" : -46.6 } } } } '
{
    "took" : 1,
    "timed_out" : false,
    "_shards" : {
        "total" : 5,
        "successful" : 5,
        "failed" : 0
    },
    "hits" : {
        "total" : 0,
        "max_score" : null,
        "hits" : [ ]
    }
}
If I comment out the fields location1-9 from the StoreIndex class, everything works fine, but if I leave them in to get multiple location points, I get no results for the same query (user position). This happens both in Django and directly via curl. That is, if I have only one location (say location0), both queries return correct results; with more locations (location0-9), neither gives any results.
Here are the results of querying directly using curl with only one location field:
$ curl -XGET http://localhost:9200/stores/modelresult/_search?pretty=true -d '{ "query" : { "match_all": {} }, "filter" : {"geo_distance" : { "distance" : "6km", "location0" : { "lat" : -23.5, "lon" : -46.6 } } } } '
{
    "took" : 3,
    "timed_out" : false,
    "_shards" : {
        "total" : 5,
        "successful" : 5,
        "failed" : 0
    },
    "hits" : {
        "total" : 9,
        "max_score" : 1.0,
        "hits" : [ {
            "_index" : "stores",
            "_type" : "modelresult",
            "_id" : "store.store.110",
            "_score" : 1.0, "_source" : {"django_ct": "store.store", "text": "RESULT OF THE SEARCH \n\n", "django_id": "110", "id": "store.store.110", "location0": "-23.4487554,-46.58912"}
        },
        ... lots of results here ...
        ]
    }
}
Of course, I rebuild_index after any change in StoreIndex.
Any help on how to get multiple location fields working with Elasticsearch and Django?
PS: I've cross-posted this question to the Django-Haystack and ElasticSearch Google Groups:
https://groups.google.com/d/topic/elasticsearch/85fg7vdCBBU/discussion
https://groups.google.com/d/topic/django-haystack/m2A3_SF8-ls/discussion
Thanks in advance
Mário