Getting speed limit from Advanced Datasets API (aka PDE) is not working - geocoding

https://knowledge.here.com/csm_kb?id=public_kb_csm_details&number=KB0017817
I've referenced this doc for getting the speed limit, but it's not working for a specific location, and I'm not sure if I'm doing it right.
For latitude 34.9531064, longitude -82.4189515, I was able to get 33712897 as the ReferenceId using this API: https://reverse.geocoder.ls.hereapi.com/6.2/reversegeocode.json?prox=34.9531064,-82.4189515&mode=retrieveAddresses&maxresults=1&apiKey={{YOUR_API_KEY}}&locationattributes=linkInfo
tile size = 180° / 2^level [degrees]
tileX = trunc((longitude + 180°) / tile size)
tileY = trunc((latitude + 90°) / tile size)
Using this formula with level 9, I get tileX = 277 and tileY = 355.
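In Python, that computation looks like this (a quick check; math.trunc plays the role of trunc):

import math

def tile_xy(lat, lon, level):
    # tile size = 180 / 2^level degrees
    tile_size = 180.0 / (2 ** level)
    tile_x = math.trunc((lon + 180.0) / tile_size)
    tile_y = math.trunc((lat + 90.0) / tile_size)
    return tile_x, tile_y

print(tile_xy(34.9531064, -82.4189515, 9))  # (277, 355)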
But after calling https://pde.api.here.com/1/tiles.json?layers=SPEED_LIMITS_FC1&levels=9&tilexy=277,355&app_id={{YOUR_APP_ID}}&app_code={{YOUR_APP_CODE}}&meta=1&callback=onLoadPDETiles, I cannot find ReferenceId 33712897 in the response, so I cannot get the speed limit of that specific location.
What did I do wrong?

The way you're constructing your last request will not work because you forgot to consider the Functional Class of the link. Because of this, the layers, levels, and tilexy parameters are not correct.
The linkInfo object in the Reverse Geocoding response indicates that link 33712897 has Functional Class = 5, so you want to call layer SPEED_LIMITS_FC5 instead of SPEED_LIMITS_FC1. Also, according to the documentation available here, you should be using level=13:
For road link based layers, the level is always "road functional class" + 8
This means that your calculated tiles will be 4441,5686, and your request will look like this:
https://pde.api.here.com/1/tiles.json?
layers=SPEED_LIMITS_FC5&
levels=13&
tilexy=4441,5686&
app_id={{YOUR_APP_ID}}&
app_code={{YOUR_APP_CODE}}&
meta=1
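For reference, plugging level 13 into the tile formula from the question reproduces those coordinates; a quick Python check:

import math

lat, lon, level = 34.9531064, -82.4189515, 13  # FC5, so level = 5 + 8
tile_size = 180.0 / (2 ** level)
print(math.trunc((lon + 180.0) / tile_size))  # 4441 (tileX)
print(math.trunc((lat + 90.0) / tile_size))   # 5686 (tileY)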
Now, this request will still return an empty result, because the link you chose doesn't have a speed limit in the HERE map, but at least your request is properly structured. For example, if you change your coordinates to 32.705470,-96.784640 for link 17748385 (tilexy=3787,5584) and use the exact same request structure, you will get a non-empty result.

If you want to check the speed limit using coordinates, we would suggest using Route Matching instead. Take these steps:
Compose a trace with coordinates; for example, you can compose it in CSV format as below:
latitude,longitude
34.9531064,-82.4189515
Encode it into base64 format; you will get:
bGF0aXR1ZGUsbG9uZ2l0dWRlCjM0Ljk1MzEwNjQsLTgyLjQxODk1MTU=
Pass it to the Route Matching API, such as:
https://m.fleet.ls.hereapi.com/2/matchroute.json?file=bGF0aXR1ZGUsbG9uZ2l0dWRlCjM0Ljk1MzEwNjQsLTgyLjQxODk1MTU=&attributes=SPEED_LIMITS_FCn(*)&apiKey=YOUR_API_KEY
Note that attributes=SPEED_LIMITS_FCn(*) means getting all attributes of the SPEED_LIMITS_FCn tables, i.e. FC1 through FC5.
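Put together, the three steps look like this in Python (a minimal sketch using the requests library; YOUR_API_KEY is a placeholder):

import base64
import requests

# Step 1: compose the trace in CSV format
trace = "latitude,longitude\n34.9531064,-82.4189515"

# Step 2: encode it in base64
encoded = base64.b64encode(trace.encode()).decode()
# -> bGF0aXR1ZGUsbG9uZ2l0dWRlCjM0Ljk1MzEwNjQsLTgyLjQxODk1MTU=

# Step 3: pass it to the Route Matching API
resp = requests.get(
    "https://m.fleet.ls.hereapi.com/2/matchroute.json",
    params={
        "file": encoded,
        "attributes": "SPEED_LIMITS_FCn(*)",
        "apiKey": "YOUR_API_KEY",
    },
)
print(resp.json())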
Then you will notice that you get nothing about speed limits, because the location 34.9531064,-82.4189515 is near an FC5 road which has no speed limit.
You can try a new location which is on a highway, such as 34.962142745546274,-82.4333132247333; then you will get speed limits:
https://m.fleet.ls.hereapi.com/2/matchroute.json?file=bGF0aXR1ZGUsbG9uZ2l0dWRlCjM0Ljk2MjE0Mjc0NTU0NjI3NCwtODIuNDMzMzEzMjI0NzMzMw==&attributes=SPEED_LIMITS_FCn(*)&apiKey=YOUR_API_KEY


How to get the y-axis range in Stata

Suppose I am using some twoway graph command in Stata. Without any action on my part, Stata will choose some reasonable values for the ranges of both the y and x axes, based both on the minimum and maximum y and x values in my data and on some algorithm that decides when it would be prettier for the range to extend to a number like '0' instead of '0.0139'. Wonderful! Great.
Now suppose that after (or while) I draw my graph, I want to slap some very important text onto it, and I want to be choosy about precisely where the text appears. Having the minimum and maximum values of the displayed axes would be useful: how can I get these min and max numbers? (Either before or while calling the graph command.)
NB: I am not asking how to set the y or x axis ranges.
Since this issue has been a bit of a headache for me for quite some time, and I believe there is no good solution out there yet, I wanted to write up two ways in which I was able to solve a similar problem to the one described in the post. Specifically, I was able to solve the issue of gray shading for part of the graph using these.
Define a global macro in the code generating the axis labels
This is the less elegant way to do it, but it works well. Locate the tickset_g.class file in your ado path. The graph twoway command uses this to draw the axes of any graph. There, I defined a global macro in the draw program that takes the value of the omin and omax locals after they have been set to the minimum of the axis range and the data range (the command that does this is local omin = min(.scale.min,omin), and analogously for the max), since the latter sometimes exceeds the former. You could also define the global further up in that code block to get only the axis extent. You can then access the axis range through the globals after the graph command (and use something like addplot to add to the previously drawn graph).
Two caveats for this approach: using global macros is, as far as I understand, bad practice and can be dangerous, so I used names with the prefix userwritten that I was sure wouldn't appear in any program. Also, depending on your organization's policies, you may not have the administrator privileges needed to alter this file. However, it is the simpler way. If you prefer a more elegant approach along the lines of what Nick Cox suggested, then you can:
Use the undocumented gdi natscale command to define your own axis labels
The gdi commands are the internal commands used to generate what you see as graph output (cf. https://www.stata.com/meeting/dcconf09/dc09_radyakin.pdf). The tickset_g.class file uses the gdi natscale command to generate the nice numbers of the axes. Basic documentation is available with help _natscale: you enter the minimum and maximum (e.g. from a summarize return) and a suggested number of steps, and the command returns a min, max, and delta to be used in the x|ylabel option (there are several possible ways to do this, all rather straightforward once you have those numbers, so I won't spell them out for brevity). You would have to adjust this approach if you use some scale transformation.
Hope this helps!
I like Nick's suggestion, but if you're really determined, it seems that you can find these values by inspecting the output after you set trace on. Here's some inefficient code that seems to do exactly what you want. Three notes:
When I import the log file, I get this message:
Note: Unmatched quote while processing row XXXX; this can be due to a formatting problem in the file or because a quoted data element spans multiple lines. You should carefully inspect your data after importing. Consider using option bindquote(strict) if quoted data spans multiple lines or option bindquote(nobind) if quotes are not used for binding data.
Sometimes the data fall outside of the min and max range values that are chosen for the graph's axis labels (but you can easily test for this).
The log linesize is actually important to my code below because the key values must fall on the same line as the strings that I use to identify the helpful rows.
* start a log (critical step for my solution)
cap log close _all
set linesize 255
log using "log", replace text
* make up some data:
clear
set obs 3
gen xvar = rnormal(0,10)
gen yvar = rnormal(0,.01)
* turn trace on, run the -twoway- call, and then turn trace off
set trace on
twoway scatter yvar xvar
set trace off
cap log close _all
* now read the log file in and find the desired info
import delimited "log.log", clear
egen my_string = concat(v*)
keep if regexm(my_string,"forvalues yf") | regexm(my_string,"forvalues xf")
drop if regexm(my_string,"delta")
split my_string, parse("=") gen(new)
gen axis = "vertical" if regexm(my_string,"yf")
replace axis = "horizontal" if regexm(my_string,"xf")
keep axis new*
duplicates drop
loc my_regex = "(.*[0-9]+)\((.*[0-9]+)\)(.*[0-9]+)"
gen min = regexs(1) if regexm(new3,"`my_regex'")
gen delta = regexs(2) if regexm(new3,"`my_regex'")
gen max_temp= regexs(3) if regexm(new3,"`my_regex'")
destring min max_temp delta, replace
gen max = min + delta* int((max_temp-min)/delta)
*here is the info you want:
list axis min delta max

UserWarning in pymc3: What does reparameterize mean?

I built a pymc3 model using the DensityDist distribution. I have four parameters, of which 3 use Metropolis and one uses NUTS (this is automatically chosen by pymc3). However, I get two different UserWarnings:
1. Chain 0 contains a number of diverging samples after tuning. If increasing target_accept does not help, try to reparameterize.
May I know what reparameterize means here?
2. The acceptance probability in chain 0 does not match the target. It is , but should be close to 0.8. Try to increase the number of tuning steps.
Digging through a few examples, I used 'random_seed', 'discard_tuned_samples', 'step = pm.NUTS(target_accept=0.95)' and so on, and got rid of these user warnings. But I couldn't find details of how these parameter values are decided. I am sure this has been discussed in various contexts, but I am unable to find solid documentation for it. I was using a trial-and-error method, as below.
with patten_study:
    # SEED = 61290425  # 51290425
    step = pm.NUTS(target_accept=0.95)
    trace = pm.sample(step=step)
    # also tried: pm.sample(4000, tune=10000, step=step,
    #                       discard_tuned_samples=False, random_seed=SEED)
I need to run these on different datasets, hence I am struggling to fix these parameter values for each dataset I am using. Is there any way to supply these values, check the outcome (whether there are any user warnings, and then try other values), and run it in a loop?
Pardon me if I am asking something stupid!
In this context, re-parametrization basically means finding a different but equivalent model that is easier to compute. There are many things you can do, depending on the details of your model:
Using a Normal distribution with a large variance instead of a Uniform distribution.
Changing from a centered hierarchical model to a non-centered one (see the sketch after this list).
Replacing a Gaussian with a Student-T.
Modeling a discrete variable as a continuous one.
Marginalizing variables, like in this example.
Whether these changes make sense or not is something that you should decide based on your knowledge of the model and the problem.
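For example, the switch from a centered to a non-centered hierarchical model (the second item above) looks like this in pymc3. The model below is a generic illustration, not your DensityDist model:

import pymc3 as pm

# Centered: theta is sampled directly; when sigma gets small, the
# posterior develops a "funnel" geometry that produces divergences.
with pm.Model() as centered:
    mu = pm.Normal("mu", mu=0, sigma=5)
    sigma = pm.HalfNormal("sigma", sigma=5)
    theta = pm.Normal("theta", mu=mu, sigma=sigma, shape=8)

# Non-centered: sample a standard Normal and rescale. The model is
# equivalent, but the geometry is much easier for NUTS to explore.
with pm.Model() as non_centered:
    mu = pm.Normal("mu", mu=0, sigma=5)
    sigma = pm.HalfNormal("sigma", sigma=5)
    theta_raw = pm.Normal("theta_raw", mu=0, sigma=1, shape=8)
    theta = pm.Deterministic("theta", mu + sigma * theta_raw)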

SegNet results on the training set (tested via test_segmentation.py)

I ran SegNet on my own dataset (following the SegNet tutorial) and I see great results via test_segmentation.py.
My problem is that I want to see the real net output, and not test_segmentation's own colorization (by class).
For example, if I have trained the net with 2 classes, then after training I want to see not only 2 colors (as we see with the classes), but the net's real-valued segmentation ([0.22, 0.19, 0.3, ...], lighter and darker as the net sees it).
I hope that I explained myself well. Thanks for helping.
You could use a Python script to achieve what you want. Take a look at this script.
The command out = out['argmax'] extracts the raw output, so you can get a segmentation map with 'lighter and darker' values, as you wanted.
When you say the 'real' net color segmentation, I will assume that you mean the probability maps. Effectively, the last layer will have one map for every class; and if you check the function predict in inference.py, they take the argmax, that is, the channel (which represents the class) with the highest probability. If you want to get these maps, you just have to get the data without computing the argmax, something like:
predicted = net.blobs['prob'].data
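As a concrete sketch of saving such a map (assuming, as in the SegNet inference script, that net is the loaded caffe.Net with forward() already called and an output blob named 'prob'):

import numpy as np
from PIL import Image

# Assumption: `net` is the caffe.Net from the SegNet inference script,
# with the input set and net.forward() already called.
prob = net.blobs['prob'].data[0]   # shape: (num_classes, height, width)
class1 = prob[1]                   # per-pixel probability of class 1, in [0, 1]

# Save as grayscale: lighter pixels = higher probability
Image.fromarray((class1 * 255).astype(np.uint8)).save('prob_class1.png')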
I solved it. The solution is to set cmin and cmax to the range 0 to 1 in the SciPy saving method. For example: scipy.misc.toimage(output, cmin=0.0, cmax=1.0).save('/path/.../image.png')

Leaflet - Custom Map does not load - Wrong File Size?

Something like OpenStreetMap works fine!
Everything is OK: zoom and markers work, and the code is small...
But when I add my own URL, the tiles do not get loaded!
So I think there is something wrong with my image size? Pixels?
The Leaflet tutorial just says "add a map URL".
Look in my code, there is a notice with examples;
I think this is not a "coding" error, more like a file error.
I hope somebody has an idea :-)
http://hizi.xyz/Map/
You are using http://www.hizi.xyz/Map/{z}/{x}/{y}.png as your base URL, but your tiles are actually at http://www.hizi.xyz/Map/tiles/{z}/{x}/{y}.png.
At the zoom level you've initialized the map (11), you will not be able to see your custom tiles anyway, so you should set maxZoom: 5 in the map options. Also, your tiles only cover the northwestern quadrant of the map (from 0-85 degrees north and 0-180 degrees west), so you will either want to restrict panning within those bounds by setting the maxBounds option, or (if you actually want the map to be global) modify the directory structure by lowering the index for each of the {z} directories by 1. The following code should work to load your map as it is and prevent users from getting outside the boundaries of your map tiles:
var southWest = L.latLng(0, -180),
northEast = L.latLng(85, 0),
bounds = L.latLngBounds(southWest, northEast);
var map = L.map('map', {maxZoom: 5, maxBounds: bounds}).setView([42.5, -90], 3);
L.tileLayer('http://www.hizi.xyz/Map/tiles/{z}/{x}/{y}.png').addTo(map);
If, for aesthetic reasons, you want to keep the map from wrapping at the world boundaries, you may also want to set noWrap: true in the tileLayer options, i.e.:
L.tileLayer('http://www.hizi.xyz/Map/tiles/{z}/{x}/{y}.png', {noWrap: true}).addTo(map);
Here is an example fiddle that shows it working:
http://jsfiddle.net/nathansnider/6qeL29sm/

How to normalize a sequence of numbers?

I am working on a user behavior project. Based on user interaction I have got some data. There is a nice sequence which smoothly increases and decreases over time. But there are little discrepancies, which are very bad. Please refer to the graph below:
You can also find data here:
2.0789 2.09604 2.11472 2.13414 2.15609 2.17776 2.2021 2.22722 2.25019 2.27304 2.29724 2.31991 2.34285 2.36569 2.38682 2.40634 2.42068 2.43947 2.45099 2.46564 2.48385 2.49747 2.49031 2.51458 2.5149 2.52632 2.54689 2.56077 2.57821 2.57877 2.59104 2.57625 2.55987 2.5694 2.56244 2.56599 2.54696 2.52479 2.50345 2.48306 2.50934 2.4512 2.43586 2.40664 2.38721 2.3816 2.36415 2.33408 2.31225 2.28801 2.26583 2.24054 2.2135 2.19678 2.16366 2.13945 2.11102 2.08389 2.05533 2.02899 2.00373 1.9752 1.94862 1.91982 1.89125 1.86307 1.83539 1.80641 1.77946 1.75333 1.72765 1.70417 1.68106 1.65971 1.64032 1.62386 1.6034 1.5829 1.56022 1.54167 1.53141 1.52329 1.51128 1.52125 1.51127 1.50753 1.51494 1.51777 1.55563 1.56948 1.57866 1.60095 1.61939 1.64399 1.67643 1.70784 1.74259 1.7815 1.81939 1.84942 1.87731
1.89895 1.91676 1.92987
I want to smooth out this sequence. The technique should be able to eliminate points with the characteristics of X and Y in the graph, i.e. errors in an otherwise monotonically increasing or decreasing stretch.
If it cannot eliminate them, the technique should be able to shift them so that the series is not affected by the errors.
What I have tried and failed:
I tried to test the difference between values. In some special cases it works, but for a sequence like the one presented here, the distance between numbers is not such that I can cut out the errors.
I tried applying a threshold X, so that only changes larger than X are accepted and any other point is mapped to the previous point. Here I have great trouble deciding on the value of X: because this is based on user interaction, I am not really in control of it. If the user interaction is such that its plot would be a zigzag pattern, I end up in a 'no user movement data detected at all' situation.
Please share the techniques that you are aware of.
PS: The data made available in this example is a particular case. There is no typical pattern in which the numbers are going to occur, but we expect some range to be continuous across all the examples. The solution I am seeking is generic.
I do not know how much effort you want to put into this problem, but if you want theoretical guarantees, topological persistence seems well adapted to your problem, imho.
Basically, with that method you can filter local maxima/minima by fixing a scale, and there are theoretical proofs that say that if your sampling is close to your function, then you extract the correct number of maxima with persistence.
You can see these slides (mainly pages 7-9) to get an idea of the method.
Basically, if you take your points as a landscape and imagine a watershed starting from the maximum height and decreasing, you get some peaks.
Every peak has a time when it is born, which is when it emerges, and a time when it dies, which is when it merges with a higher peak. A persistence diagram plots a point for every peak, whose x/y coordinates are its times of birth and death (by convention, the first peak never dies and is not shown).
If a peak is a global maximum, it will be further from the diagonal in the persistence diagram than a local-maximum peak. To remove local maxima, you have to remove the peaks close to the diagonal. There are four local maxima in your example, as you can see in the persistence diagram of your data (thanks for providing the data, btw), and two global ones (the first peak is not pictured in a persistence diagram):
If you add noise to your data like this:
You will still get a very decent persistence diagram that will allow you to filter local maxima as you want:
Please ask if you want more details or references.
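If you want an off-the-shelf approximation in Python: the "prominence" of a peak, as computed by scipy.signal.find_peaks, is essentially the persistence of that maximum, so you can filter peaks the same way. This is a stand-in for a full persistence computation, sketched on synthetic data shaped roughly like the question's sequence:

import numpy as np
from scipy.signal import find_peaks

# Synthetic stand-in for the sequence in the question: one broad hump
# plus small noise, so a few spurious local maxima appear.
x = np.linspace(0, 1, 105)
data = 2.0 + 0.6 * np.sin(np.pi * x) + np.random.normal(0, 0.005, x.size)

# `prominence` plays the role of persistence: peaks whose height above
# the surrounding saddle is below the threshold are filtered out.
peaks, props = find_peaks(data, prominence=0.05)
print(peaks, props["prominences"])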
Since you cannot decide on a cutoff frequency, nor even on the filter you want to use, I would implement several and let the user set the parameters.
The first thing that I thought of is a running average (sketched below), and you can see that there are many things to set to get different outputs.
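As a starting point, a minimal running average in Python, with the window size exposed as the parameter the user would set:

import numpy as np

def running_average(values, window=5):
    # Centered moving average; `window` is the user-tunable knob.
    # mode="same" keeps the output length equal to the input, at the
    # cost of edge effects at the first and last window/2 points.
    kernel = np.ones(window) / window
    return np.convolve(values, kernel, mode="same")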