I have location coordinates in the form of latitude, longitude, such as 23° 39' 24.8" N & 58° 11' 36.5" E (see the pic below). But at my workplace I use ArcGIS, and it seems that it doesn't support coordinates in this degree (latitude, longitude) form. I am now planning to write a C++ program to convert the degree form to UTM notation, for example 23° 39' 24.8" N & 58° 11' 36.5" E to 2616726 N & 621702 E. I would like to know how I can do such a conversion.
PS: E = East, N = North.
Wikipedia explains how to do this. Google Earth can use decimal degree notation.
Edit: Looking at your picture, I think you want to convert to UTM? Wikipedia also has the formula for this.
(Note: check Wikipedia's formulas against some other source before using them.)
Adding to rve's answer, you can implement the lat/long -> UTM conversion by following the equations on Wikipedia.
Alternatively, you could download and install an open-source geodesy package such as GeographicLib to do the conversion for you.
A third option is to borrow code directly from GeographicLib or another open-source package, such as this navsat_conversions file. Be sure to give credit to the original author(s) if you do this.
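For illustration, here is roughly what the GeographicLib route looks like. This is only a minimal sketch, assuming GeographicLib is installed and linked; the DMS values from your question are converted to decimal degrees by hand:

#include <GeographicLib/UTMUPS.hpp>
#include <iostream>

int main() {
    // 23° 39' 24.8" N, 58° 11' 36.5" E converted to decimal degrees
    double lat = 23.0 + 39.0 / 60.0 + 24.8 / 3600.0;
    double lon = 58.0 + 11.0 / 60.0 + 36.5 / 3600.0;

    int zone;
    bool northp;
    double easting, northing;
    GeographicLib::UTMUPS::Forward(lat, lon, zone, northp, easting, northing);

    std::cout << "zone " << zone << (northp ? "N" : "S")
              << "  E " << easting << "  N " << northing << std::endl;
}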
I want to solve the following modified transport equation for the calculation of sensitivities:
Within interFoam, I implemented the following code right after solving alphaEqn.H (along with initial fields dAlpha and dU, calculation of dPhi and other modifications):
{
    word dAlphaScheme("div(phi,dAlpha)");
    word dPhiScheme("div(dPhi,alpha)");

    #include "alphaSuSp.H"

    surfaceScalarField dAlphaPhi
    (
        fvc::flux(phi, dAlpha, dAlphaScheme)
      + fvc::flux(dPhi, alpha1, dPhiScheme)
    );

    MULES::explicitSolve
    (
        geometricOneField(),
        dAlpha,
        dAlphaPhi,
        Sp,
        Su
    );
}
It works, but I'm not sure if MULES::explicitSolve is the right way to solve this equation. After a few time steps the solution smears heavily. What would you recommend?
cfd-online is probably your best bet as I mentioned in the comments above. It looks like you're trying to implement an additional scalar transport equation, although it's not clear what physics you're calculating when you say 'sensitivities'.
Take a look at the example here on adding additional scalar transport equations to OpenFOAM solvers: https://openfoamwiki.net/index.php/How_to_add_temperature_to_icoFoam
It is out of date as it is for OpenFOAM v1.7, but the concept is 99.999% the same (a short sketch of the resulting equation follows the list below):
Create the field
Add appropriate transport coefficients to dictionaries
Add appropriate schemes to fvSchemes
Define the transport equation
Use the .solve() function on the transport equation
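For reference, the core of that tutorial (steps 4 and 5 above) boils down to something like the following sketch. T, phi and DT are the names used in the wiki example, not your dAlpha fields:

fvScalarMatrix TEqn
(
    fvm::ddt(T)
  + fvm::div(phi, T)
  - fvm::laplacian(DT, T)
);

TEqn.solve();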
I have some experiments in mind related to the skipgram model, so I have started to study and modify the optimized implementation in the tensorflow/models repository, in tutorials/embedding/word2vec_kernels.cc. Eventually I came across the part where corpus subsampling is done.
According to Tomáš Mikolov's paper (https://arxiv.org/abs/1310.4546, eq. 5), the word should be kept with probability

keep_prob = sqrt(t / f(w)),

where t denotes the threshold parameter (chosen as 10^-5 in the paper) and f(w) the frequency of the word w, but the code in word2vec_kernels.cc is the following:
float keep_prob = (std::sqrt(word_freq / (subsample_ * corpus_size_)) + 1) *
(subsample_ * corpus_size_) / word_freq;
which, substituting f(w) = word_freq / corpus_size_ and t = subsample_, can be rewritten in the previously presented notation as

keep_prob = sqrt(t / f(w)) + t / f(w)
What is the motivation behind this change? Is it just to bring some kind of dependence on the corpus size into the formula? Or is it some transformation of the original formula? Was it chosen empirically?
Edit: link to the mentioned file on github
https://github.com/tensorflow/models/blob/master/tutorials/embedding/word2vec_kernels.cc
Okay, so I guess that without the corpus_size term the graph looks roughly the same as the original formula's. The extra term adds a dependence on the corpus size to the formula, and it also "works with the large numbers": we can compute the discard/keep probability directly from the raw counts, without first normalizing the word frequency into a proper distribution.
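To make the relationship concrete, here is a small stand-alone C++ check (the numbers are made up for illustration) that the expression in word2vec_kernels.cc equals sqrt(t/f(w)) + t/f(w) once you substitute f(w) = word_freq / corpus_size_:

#include <cmath>
#include <cstdio>

int main() {
    const double subsample = 1e-3;             // t (the kernel's subsample_ flag)
    const double corpus_size = 1e8;            // total number of tokens in the corpus
    const double word_freq = 1e6;              // raw count of one frequent word
    const double f = word_freq / corpus_size;  // relative frequency f(w)

    // formula as written in word2vec_kernels.cc
    double keep_tf = (std::sqrt(word_freq / (subsample * corpus_size)) + 1) *
                     (subsample * corpus_size) / word_freq;

    // the same thing in the paper's notation, plus the extra t/f(w) term
    double keep_rewritten = std::sqrt(subsample / f) + subsample / f;

    std::printf("%f %f\n", keep_tf, keep_rewritten);  // both print 0.416228
    return 0;
}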
I am using BUGS software through R for Bayesian analysis, and I use the ggmcmc package for Bayesian inference.

In my recent example I have a whole matrix b of parameters under monitoring, with dimensions 5x8. Now, if I use a plot from the ggmcmc package straight away, the parameters are so many that I can't see a thing in the output posterior plot.
e.g. ggs_histogram
Now, plot functions in ggmcmc have a parameter called family, which you use to select a subset of parameters to include in the plot. The official package page says you have to set family equal to a regular expression that matches the parameters you want, and it's quite easy if you have, say, parameters a and b and you want to plot only b (family='b').
Now I want to plot only the elements of one column of the b matrix I mentioned, for example b[1,1], b[2,1], b[3,1], ..., b[8,1].
So I tried to subset it the usual way, like family='b[,1]', but I get:
Error in seq.default(mn, mx, by = bw) :
'from' cannot be NA, NaN or infinite
In addition: Warning messages:
1: In min(x) : no non-missing arguments to min; returning Inf
2: In max(x) : no non-missing arguments to max; returning -Inf
Any ideas? Maybe a correct regexp, or a ggplot facet_grid trick?
Eventually, the official ggmcmc package PDF documentation had all the info I was looking for. I was right about needing a regular expression, and the package tutorial was pretty informative about the form the regular expression is expected to take.

So if I wanted, say, to infer about the elements of the first column of the parameter matrix,
family='b\\[.,1\\]'
would do the job just fine. This works with any of the inference functions of the ggmcmc package.
I am writing a Python program to generate some maps in Google Earth. I am using a colleague's script written in Perl, and I came to a point where there is this great-circle call:
#r = great_circle_destination($long, $lat, $bearing, $dist);
What is the equivalent for Python? Is there a module like this:
use Math::Trig ':great_circle';
I'm pretty sure there's no such thing in the standard library. There are Python GIS libraries with similar functions, but there are many different ways to do this calculation depending on which model of the Earth you use (e.g. a spherical Earth, an ellipsoid, or something more complex), so you would probably want to check out the source code of the Perl module and translate that to Python.
If you want to implement it yourself, you might want to look at this page for the formula for "Destination point given distance and bearing from start point": http://www.movable-type.co.uk/scripts/latlong.html
It shouldn't be too difficult to translate that formula to python:
import math

R = 6371.0  # mean Earth radius; use the same length unit as dist (here: km)

def great_circle_destination(lon1, lat1, bearing, dist):
    # lon1, lat1 and bearing are in radians
    lat2 = math.asin(math.sin(lat1) * math.cos(dist / R) +
                     math.cos(lat1) * math.sin(dist / R) * math.cos(bearing))
    lon2 = lon1 + math.atan2(math.sin(bearing) * math.sin(dist / R) * math.cos(lat1),
                             math.cos(dist / R) - math.sin(lat1) * math.sin(lat2))
    return lon2, lat2
For a school project, I have a simple program which compares 20x20 photos. I load 20 photos, then insert a 21st photo, which is compared to the existing 20, and the program reports which photo I inserted (or which one is most similar). My teacher wanted me to use the nearest-neighbour algorithm, so I compute the distance to every photo. I got everything working, but if the photos are too similar, I have trouble saying which one is closer to mine. For example, I get these distances with two different photos (well, they are ALMOST the same):
0 distance: 1353.07982026191
1 distance: 1353.07982026191
That is already 15 digits, and I am using the double type. I read that long double is the same. Is there any "easy" way to store numbers with more than 15 digits and do math on them?
I compute the distance using the Euclidean distance, d(p, q) = sqrt(sum_i (p_i - q_i)^2).
I just need to be more precise. Or is that a limit I probably won't get past here, and should I tell my teacher I can't compare such similar photos?
I think you need this: gmplib.org
There's a guide on how to install the library on that site too.
And here's article about floats: http://gmplib.org/manual/C_002b_002b-Interface-Floats.html#C_002b_002b-Interface-Floats
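For example, a minimal sketch using GMP's C++ float interface (the class used below is described on the manual page linked above; compile with -lgmpxx -lgmp):

#include <gmpxx.h>
#include <iostream>

int main() {
    mpf_set_default_prec(256);   // 256 bits of mantissa instead of double's 53

    // two distances that an ordinary double could no longer tell apart
    mpf_class d0("1353.0798202619112345");
    mpf_class d1("1353.0798202619112346");

    std::cout << (d0 < d1) << std::endl;   // prints 1: GMP still sees the difference
    return 0;
}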
Maybe you could use an algebraic approach.
Let us assume that you are trying to calculate whether vector x is closer to a or to b. What you need to calculate is the sign of
d2(x, a) - d2(x, b)
Which becomes (I'll omit some passages for brevity)

sum_i [ (x_i - a_i)^2 - (x_i - b_i)^2 ]

and then

sum_i (b_i - a_i) * ((x_i - a_i) + (x_i - b_i))
Which only contains differences between values that should be very similar. Summing over such small values should yield better precision than working on the aggregates.
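In code, the comparison might look like this sketch; the vectors here stand for the flattened pixel values of the photos:

#include <vector>

// negative result: x is closer to a; positive: x is closer to b
double closer_sign(const std::vector<double>& x,
                   const std::vector<double>& a,
                   const std::vector<double>& b)
{
    double s = 0.0;
    for (std::size_t i = 0; i < x.size(); ++i)
        // (x_i - a_i)^2 - (x_i - b_i)^2 rewritten as a product of small differences
        s += (b[i] - a[i]) * ((x[i] - a[i]) + (x[i] - b[i]));
    return s;   // sign of d^2(x, a) - d^2(x, b)
}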