NetCDF: writing 1-D arrays into 2-D arrays - Fortran

I am using code similar to the "Read in coordinate variables" example to read latitude and longitude from a netCDF file:
call check( nf_inq_varid(ncid, LAT_NAME, lat_varid) )
call check( nf_inq_varid(ncid, LON_NAME, lon_varid) )
! Read the latitude and longitude data.
call check( nf_get_var(ncid, lat_varid, lats) )
call check( nf_get_var(ncid, lon_varid, lons) )
Both lats and lons are one-dimensional Fortran arrays.
My initial data set is in geographical coordinates, and I need to convert it into a rotated lat/lon grid using the code from "Geographical to rotated lat lon grid".
It works as follows: for each pair of geographical latitude and longitude I get one rotated latitude, and likewise one rotated longitude. Mathematically:
f(latitude_geo, longitude_geo) = latitude_rot
g(latitude_geo, longitude_geo) = longitude_rot
So the rotated lat and lon arrays are two-dimensional arrays. I want to write them back to the original netCDF file using nf_put_vara_real or nf90_put_var (F77 or F90 interface, it does not matter).
How can I do this, since the original lat and lon arrays are 1-D? Is it possible to read the original 1-D arrays as 2-D arrays?

Your transformation creates 2-dimensional longitude and latitude values, so you need to define x and y as new running indices.
lon(lon) => lon(x,y)
lat(lat) => lat(x,y)
Basically, you declare lon and lat as variables and add x and y as new dimensions. This way longitudes and latitudes are mapped to the correct grid points.
If written correctly to a NetCDF file, its contents should look similar to this:
netcdf example {
dimensions:
x = 360 ;
y = 180 ;
variables:
float x(x) ;
x:long_name = "x-dimension" ;
float y(y) ;
y:long_name = "y-dimension" ;
float lon(y, x) ;
lon:long_name = "longitude" ;
lon:units = "degrees_east" ;
...
float lat(y, x) ;
lat:long_name = "latitude" ;
lat:units = "degrees_north" ;
...
float data(y, x) ;
data:long_name = "your data name" ;
...
}
(Output via ncdump -h example.nc)
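To make the mapping concrete, here is a minimal Python/numpy sketch (not the netCDF API itself) of how two 1-D coordinate arrays expand into the 2-D lat/lon arrays described above. The rotation itself is replaced by a placeholder shift, since the actual transform comes from the linked code:

```python
import numpy as np

# 1-D coordinate arrays as read from the file (hypothetical values).
lons = np.linspace(-10.0, 10.0, 5)   # plays the role of "lons"
lats = np.linspace(40.0, 50.0, 3)    # plays the role of "lats"

# Expand to 2-D: every (y, x) grid point gets its own lon/lat pair.
lon2d, lat2d = np.meshgrid(lons, lats)   # both have shape (3, 5), i.e. (y, x)

# Stand-in for the real rotation: any f(lat_geo, lon_geo) -> (lat_rot, lon_rot)
# produces two 2-D arrays of the same (y, x) shape, which is exactly what the
# 2-D lat/lon variables in the CDL above hold.
lat_rot = lat2d - 30.0        # placeholder transform, NOT the real rotation
lon_rot = lon2d + 15.0

print(lon2d.shape, lat_rot.shape)
```

These two 2-D arrays are then what you write into `float lat(y, x)` and `float lon(y, x)`.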

Related

How to visualize 2d array of doubles in vtk?

I have a 2-D array (of size 20x30) of doubles:
double a[20*30];
How can I visualize it using VTK? It is extremely difficult to find proper documentation. The closest example I found is this one; however, it takes three unsigned chars representing a color as input. As I understand it, I should use the vtkScalarsToColors class to somehow map scalars to colors, but I can't figure out how to put everything together into a single piece of code.
What you probably want to do is assign scalars to the points or cells of a surface or volume mesh; VTK can then take care of the visualization. This is demonstrated in the following example: ScalarBarActor. For the basic usage, follow the Scalars example.
However, you need to provide the suitable mesh yourself onto which you want to map the values. From your question, it is not entirely clear what you mean by "how to visualize a 2d array" of values. If you want to assign scalar values on a planar 20x30 grid, you first need to create a surface object (of type vtkPolyData) with triangular or quadrangular cells, and then assign the values to the points of the mesh using surface->GetPointData()->SetScalars(), as demonstrated in the above examples.
Convenient in this case is vtkPlaneSource; look here for the corresponding example. The number of grid points can be set with SetXResolution() and SetYResolution(). (In case this is not clear: vtkPlaneSource inherits from vtkPolyDataAlgorithm; to access the underlying vtkPolyData object, use the method GetOutput().)
Update: I added sample code that demonstrates the procedure, in Python for better readability.
# This code has been written by normanius under the CC BY-SA 4.0 license.
# License: https://creativecommons.org/licenses/by-sa/4.0/
# Author: normanius: https://stackoverflow.com/users/3388962/normanius
# Date: August 2018
# Reference: https://stackoverflow.com/a/51754466/3388962
import vtk
import numpy as np
###########################################################
# CREATE ARRAY VALUES
###########################################################
# Just create some fancy looking values for z.
n = 100
m = 50
xmin = -1; xmax = 1
ymin = -1; ymax = 1
x = np.linspace(xmin, xmax, n)
y = np.linspace(ymin, ymax, m)
x, y = np.meshgrid(x, y)
x, y = x.flatten(), y.flatten()
z = (x+y)*np.exp(-3.0*(x**2+y**2))
###########################################################
# CREATE PLANE
###########################################################
# Create a planar mesh of quadrilaterals with n x m points.
# (SetOrigin, SetPoint1 and SetPoint2 are only required if the
# extent of the plane should match the data range. For the
# mapping of the scalar values, they are not required.)
plane = vtk.vtkPlaneSource()
plane.SetResolution(n-1,m-1)
plane.SetOrigin([xmin,ymin,0]) # Lower left corner
plane.SetPoint1([xmax,ymin,0])
plane.SetPoint2([xmin,ymax,0])
plane.Update()
# Map the values to the planar mesh.
# Assumption: same index i for scalars z[i] and mesh points
nPoints = plane.GetOutput().GetNumberOfPoints()
assert(nPoints == len(z))
# VTK has its own array format. Convert the input
# array (z) to a vtkFloatArray.
scalars = vtk.vtkFloatArray()
scalars.SetNumberOfValues(nPoints)
for i in range(nPoints):
    scalars.SetValue(i, z[i])
# Assign the scalar array.
plane.GetOutput().GetPointData().SetScalars(scalars)
###########################################################
# WRITE DATA
###########################################################
writer = vtk.vtkXMLPolyDataWriter()
writer.SetFileName('output.vtp')
writer.SetInputConnection(plane.GetOutputPort())
writer.Write() # => Use for example ParaView to see scalars
###########################################################
# VISUALIZATION
###########################################################
# This is a bit annoying: ensure a proper color-lookup.
colorSeries = vtk.vtkColorSeries()
colorSeries.SetColorScheme(vtk.vtkColorSeries.BREWER_DIVERGING_SPECTRAL_10)
lut = vtk.vtkColorTransferFunction()
lut.SetColorSpaceToHSV()
nColors = colorSeries.GetNumberOfColors()
zMin = np.min(z)
zMax = np.max(z)
for i in range(0, nColors):
    color = colorSeries.GetColor(i)
    color = [c/255.0 for c in color]
    t = zMin + float(zMax - zMin)/(nColors - 1) * i
    lut.AddRGBPoint(t, color[0], color[1], color[2])
# Mapper.
mapper = vtk.vtkPolyDataMapper()
mapper.SetInputConnection(plane.GetOutputPort())
mapper.ScalarVisibilityOn()
mapper.SetScalarModeToUsePointData()
mapper.SetLookupTable(lut)
mapper.SetColorModeToMapScalars()
# Actor.
actor = vtk.vtkActor()
actor.SetMapper(mapper)
# Renderer.
renderer = vtk.vtkRenderer()
renderer.SetBackground([0.5]*3)
# Render window and interactor.
renderWindow = vtk.vtkRenderWindow()
renderWindow.SetWindowName('Demo')
renderWindow.AddRenderer(renderer)
renderer.AddActor(actor)
interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(renderWindow)
renderWindow.Render()
interactor.Start()
The result is the plane colored by the scalar values with the diverging color map.

working with matrices and samples using PROC IML

I am trying to draw random samples from some distribution as follows:
My code runs, but the numbers look strange, so I am not sure what went wrong; maybe some operators are wrong. The elements are extremely large.
my attempt:
C_hat=(((x`)*x)**(-1))*((x`)*z);
S=((z-x*c_hat)`)*((z-x*c_hat));
*draw sigma;
sigma = shape(RandWishart(1, 513 - 3 - 2,s**(-1)),4,4);
*draw vec(c);
vec_c_hat= colvec(c_hat`); *vectorization of c_hat;
call randseed(4321);
vec_c = RandNormal(1,vec_c_hat,(sigma`)#(((x`)*x)**(-1)));
c = shape(vec_c,4,4);
print c;
Since you haven't provided data or a reference, it is difficult to guess whether your "strange" and "extremely large" numbers are correct. However, the program looks mostly correct, so check your data.
A minor problem with your program is that you are using the SHAPE function to reshape the vec_c vector into a matrix. You should be using the SHAPECOL function (or transpose the result).
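The difference between SHAPE (which fills row by row) and SHAPECOL (which fills column by column) can be illustrated outside of IML; here is a numpy analogue (for illustration only, not IML itself):

```python
import numpy as np

# A "vec" built by stacking columns (what colvec(c_hat`) produces in IML):
# column-major order, like vec(C) in matrix algebra.
C = np.array([[1, 2],
              [3, 4]])
vec_c = C.flatten(order='F')          # column-stacked: [1, 3, 2, 4]

# SHAPE fills row by row (numpy's default C order) -> wrong matrix:
wrong = vec_c.reshape(2, 2, order='C')
# SHAPECOL fills column by column (Fortran order) -> recovers C:
right = vec_c.reshape(2, 2, order='F')

print(wrong)
print(right)
```

Only the column-major reshape recovers the original matrix, which is why SHAPECOL (or a transpose of the SHAPE result) is needed for vec_c.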
The following program uses the Sashelp.Cars data, which is distributed with SAS, to initialize the X and Z matrices. The program computes a random matrix C that is close to the least-squares estimate C_hat. I've also added some intermediate computations and comments. This version works as expected on the Sashelp.Cars data:
proc iml;
use sashelp.cars;
read all var {weight wheelbase enginesize horsepower} into X;
read all var {mpg_city mpg_highway} into Z;
close;
*normal equations and covariance;
xpx = x`*x;
invxpx = inv(xpx);
C_hat = invxpx*(x`*z);
r = z-x*c_hat;
S = r`*r;
*draw sigma;
call randseed(4321);
DF = nrow(X)-ncol(X)-2;
W = RandWishart(1, DF, inv(S)); /* 1 x (p*p) row vector */
sigma = shape(W, sqrt(ncol(W))); /* reshape to p x p matrix */
*draw vec(c);
vec_c_hat = colvec(c_hat`); /* stack columns of c_hat */
vec_c = RandNormal(1, vec_c_hat, sigma#invxpx);
c = shapecol(vec_c, nrow(C_hat), ncol(C_hat)); /* reshape w/ SHAPECOL */
print C_hat, c;

Find a longitude given a pair of (lat,long) and an offset latitude

In a geodetic coordinate system (WGS84), I have a pair of (latitude, longitude) points, say (45,50) and (60,20). I am also told that a new (latitude, longitude) pair lies along the line joining these two, at an offset of 0.1 deg latitude from (45,50), i.e. (45.1, x). How do I find this new point? What I tried was to apply the straight-line equation
y = mx+c
m = (lat1 - lat2) / (long1 - long2)
c = lat1 - m * long1
but that seemed to give wrong results.
Your problem is the calculation of m. You have turned it around!
The normal formula is:
a = (y1 - y2) / (x1 - x2)
so in your case it is:
m = (long1 - long2) / (lat1 - lat2)
so you'll get m = -2
And you also turned the calculation of c around.
Normal is:
b = y1 - a * x1
so you should do:
c = long1 - m * lat1
So you'll get c = 140.
The formula is:
long = -2 * lat + 140
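As a quick check of the slope and intercept, here is a small Python sketch using the question's two points (longitude plays the role of y, latitude of x):

```python
# The two given points, as (lat, long):
lat1, long1 = 45.0, 50.0
lat2, long2 = 60.0, 20.0

# Slope and intercept of long = m*lat + c:
m = (long1 - long2) / (lat1 - lat2)   # (50-20)/(45-60) = -2
c = long1 - m * lat1                  # 50 - (-2)*45 = 140

# Longitude at the offset latitude 45.1:
long_at_45_1 = m * 45.1 + c
print(m, c, long_at_45_1)
```

This gives m = -2, c = 140, and a longitude of 49.8 at latitude 45.1.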
Another way to think about it is given below. The result is the same, of course.
The surface line between two coordinates is not a straight line. It is a line drawn on the surface of a round object, i.e. the Earth; it will be (part of) a circle around the Earth.
However, all coordinates on that line still follow a straight line in (lat, long) space.
That is because a coordinate represents the angles of a vector from the center of the Earth to the point you are looking at: one angle relative to the Equator (latitude) and one relative to Greenwich (longitude).
So you need to setup a formula describing all coordinates for that line.
In your case the latitude goes from 45 to 60, i.e. increases by 15.
Your longitude goes from 50 to 20, i.e. decreases by 30.
So your formula will be:
(lat(t), long(t)) = (45, 50) + (15*t, -30*t) for t in [0:1]
Now you can calculate the value of t that will hit (45.1, x) and afterwards you can calculate x.
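Carrying out that calculation for the target latitude 45.1 (a small Python sketch; it agrees with the slope-intercept form long = -2*lat + 140):

```python
# Parametric form of the line: P(t) = (45, 50) + t*(15, -30), t in [0, 1].
lat_target = 45.1
t = (lat_target - 45.0) / 15.0        # the t that reaches latitude 45.1
x = 50.0 + (-30.0) * t                # the corresponding longitude

print(t, x)
```

The longitude at latitude 45.1 comes out as 49.8.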
The equations you used describe a straight line in a 2-D Cartesian coordinate system.
Longitude and latitude describe a point in a spherical coordinate system.
A spherical coordinate system is not Cartesian.
A similar question was answered here.

Reprojecting coordinates using GDAL library

I'm trying to reproject a lat/lon coordinate in the WGS84 system to a UTM one in the SIRGAS 2000 coordinate system.
Using GDAL, I started by converting a known UTM coordinate to its lat/lon counterpart in the same coordinate system (zone 29 N), just to check that I wrote the right code (I'm omitting error checking here):
OGRSpatialReference monUtm;
monUtm.SetWellKnownGeogCS("WGS84");
monUtm.SetUTM(29, true);
OGRSpatialReference monGeo;
monGeo.SetWellKnownGeogCS("WGS84");
OGRCoordinateTransformation* coordTrans = OGRCreateCoordinateTransformation(&monUtm, &monGeo);
double x = 621921.3413490148;
double y = 4794536.070196861;
int reprojected = coordTrans->Transform(1, &x, &y);
// If OK, print the coords.
delete coordTrans;
coordTrans = OGRCreateCoordinateTransformation(&monGeo, &monUtm);
reprojected = coordTrans->Transform(1, &x, &y);
// If OK, Print the coords.
delete coordTrans;
The coordinates 621921.3413490148, 4794536.070196861 correspond to the Moncelos region in northern Galicia. The round-trip transformation seems to work right: the lat/lon coordinates are the correct ones, and when projecting back to UTM, I get the same values as the originals:
UTM: 621921.34135 , 4794536.0702
Lat/lon: 43.293779579 , -7.4970160261
Back to UTM: 621921.34135 , 4794536.0702
Now, reprojecting from WGS84 lat/long to SIRGAS 2000 UTM:
// Rodovia dos Tamoios, Brazil:
// - UTM -> 23 S
// - WGS 84 -> EPSG:32723
// - SIRGAS 2000 -> EPSG:31983
OGRSpatialReference wgs;
wgs.SetWellKnownGeogCS("WGS84");
OGRSpatialReference sirgas;
sirgas.importFromEPSG(31983);
coordTrans = OGRCreateCoordinateTransformation(&wgs, &sirgas);
double x = -23.57014667;
double y = -45.49159617;
reprojected = coordTrans->Transform(1, &x, &y);
// If OK, print results
delete coordTrans;
coordTrans = OGRCreateCoordinateTransformation(&sirgas, &wgs);
reprojected = coordTrans->Transform(1, &x, &y);
// If OK, print results.
This doesn't give the same results:
WGS84 Lat/Lon input: -23.57014667 , -45.49159617
SIRGAS 2000 UTM output: 2173024.0216 , 4734004.2131
Back to WGS84 Lat/Lon: -23.570633824 , -45.491627598
As you can see, the original WGS84 lat/lon and the back-to-WGS84 lat/lon coordinates aren't exactly the same, unlike in the first test case. Also, the UTM x-coordinate has 7 digits (I thought it was limited to 6?).
In Google Maps, we can see that there's a difference of 27 meters between the two points (original point is represented by a circle. My "back-reprojected" point is represented by a dagger).
Finally, the question: am I doing the reprojection right? If so, why is there a 27 meter difference between reprojections in the second test case?
The problem is that you need to swap your axis order: the transformation expects Cartesian X/Y, i.e. Lon/Lat, not "Lat/Lon" order.
Setting this should work.
double x = -45.49159617; // Lon
double y = -23.57014667; // Lat
The difference you saw in your round-trip conversion came from projecting a point far outside the bounds of the UTM zone due to the swapped axis order.

how to Convert Lat ,long to an XY coordinate system (e.g. UTM) and then map this to pixel space of my Image

I have a very small area map, which I downloaded from OpenStreetMap (PNG), along with its OSM (.osm) file, which contains its lat/long extent.
Now I want to convert lat/long to an XY coordinate system (e.g. UTM) and then map this to the pixel space of my image, which is of size 600 x 800. I know it's a two-step process; I'd like to know how to do it. Thank you.
GPS Coordinates to Pixels
Assuming the map does not cross the prime meridian.
Assuming pixel (0, 0) is the upper left and pixel (800, 600) is the lower right.
Assuming the map is Northern Hemisphere only (no part of the map is in the Southern Hemisphere).
Determine the left-most longitude in your 800x600 image (X)
Determine the east-most longitude in your 800x600 image (Y)
Determine Longitude-Diff (Z = Y - X)
Determine north-most latitude in your 800x600 image (A)
Determine south-most latitude in your 800x600 image (B)
Determine Latitude-Diff (C = A - B)
Given a Latitude and Longitude, to determine which pixel they clicked on:
J = Input Longitude
K = Input Latitude
Calculate X-pixel (measured from the west edge, so that J = X maps to pixel 0):
XPixel = CInt(((J - X) / CDbl(Z)) * 800)
Calculate Y-pixel
YPixel = CInt(((A - K) / CDbl(C)) * 600)
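The steps above can be sketched in Python (the function name and the map bounds are hypothetical; x is measured from the west edge so that pixel 0 is the left-most column, and the mapping assumes a plain equirectangular image):

```python
def latlon_to_pixel(lon, lat, west, east, north, south, width=800, height=600):
    """Map (lon, lat) inside the image's bounding box to pixel (x, y),
    with pixel (0, 0) at the upper-left corner."""
    x = int(round((lon - west) / (east - west) * width))
    y = int(round((north - lat) / (north - south) * height))
    return x, y

# Hypothetical map covering lon 10..20 E, lat 50..60 N:
print(latlon_to_pixel(15.0, 55.0, west=10.0, east=20.0, north=60.0, south=50.0))
# center of the box -> (400, 300)
```

Here west/east/north/south correspond to X, Y, A, and B above, and the y formula inverts latitude because pixel rows grow downward while latitude grows northward.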
UTM
Here is a cartographic library that should help with GPS to UTM conversions