How to read in a raster and incorporate reference data in .aux.xml, .tfw etc? - r-raster

I've got a tif I am trying to read in with the assigned projection/datum/etc. These are tifs exported from ArcMap with .tif.xml and .tfw files containing projection info. Is there a way in R to bring in the assigned coordinate reference along with the .tif?
Read in TIF
library(raster)
r <- raster('example.tif')
r
Output
class : RasterLayer
dimensions : 199, 695, 138305 (nrow, ncol, ncell)
resolution : 50000, 50000 (x, y)
extent : -17367529, 17382471, -4692230, 5257770 (xmin, xmax,ymin, ymax)
coord. ref. : NA
data source : in memory
names : layer
values : 0.268264, 5.886104 (min, max)
I know the projection information is contained in the associated files: .aux.xml, .tfw, .tif.xml.
What is the best way to efficiently assign it to the tif?
The names of the tifs and associated metadata files follow the convention produced by the ArcMap export; the directory is shared as well.

The projection is normally stored directly in the tif (as a GeoTIFF). It seems ArcMap also stores it in the .tif.xml, though I don't really know why, as it doesn't have to. Anyway, here is something you can try:
1) find your projection, either the proj4string or the EPSG code (http://spatialreference.org)
2) assign it to your raster: crs(r) <- CRS("+init=epsg:4326")
This is fine as long as the projection you're assigning is the one the data are actually in.
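A minimal R sketch of those two steps, assuming the raster package and that the data really are in the projection being assigned (epsg:4326 here is only a placeholder; substitute the code you found on spatialreference.org):

```r
library(raster)

r <- raster('example.tif')
# epsg:4326 is a placeholder -- use the EPSG code of your actual projection
crs(r) <- CRS("+init=epsg:4326")
r  # coord. ref. is now populated
```

If the ArcMap export also produced a .prj file, its WKT string can be read with readLines() and passed to crs() instead of an EPSG code.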

Related

Crop a raster by a list of sfc_polygons

I want to crop a land use raster by 16 polygons, which are 2km-buffers around agricultural fields.
My usual way of dealing with this (crop and mask, see code below) did not work, as my polygons are a list of sfc_POLYGON geometries.
I first use st_centroid and st_buffer to build buffers around the centroids of the fields.
The buffer object is a list of 16 sfc_POLYGON geometries.
The land use object is a raster of 10*10m resolution.
fields$centroid <- st_centroid(fields$geometry)
buffer <- st_buffer(fields$centroid, 2000)
class(buffer) # [1] "sfc_POLYGON" "sfc"
buffer_landuse <- crop(land_use, buffer) # Error in .local(x, y, ...) : Cannot get an Extent object from argument y
buffer_landuse <- mask(buffer_landuse, buffer)
I guess I need to convert the list of sfc_POLYGON geometries into individual shapefiles to use the crop and mask functions, but I have not found a solution so far. I would be happy about any help. Thanks a lot!
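One way around the error, sketched here on the assumption that the raster and sf packages are in use: crop() cannot derive an Extent from a bare sfc list, so union the 16 buffers (or coerce them to an sp object) first:

```r
library(sf)
library(raster)

# union the 16 buffers into a single geometry, then coerce to sp,
# which crop() and mask() understand
buffer_sp <- as(st_union(buffer), "Spatial")

buffer_landuse <- crop(land_use, buffer_sp)
buffer_landuse <- mask(buffer_landuse, buffer_sp)
```

If a separate cropped raster per field is wanted instead, lapply() over the individual buffer polygons and crop/mask each one.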

C++ how convert a mesh object into an fbx file

I am trying to understand the best approach to converting a 3D object defined by a series of vector coordinates into a .fbx file within a C++ environment.
Let's use a simple example: say I have a simple wire-frame cube which exists as a series of 12 vectors (a cube has 12 edges), each consisting of a start and an end 3D x, y, z coordinate, e.g.
int vec1[2][3] = {
    {0, 0, 0},
    {1, 0, 0}
};
This is in a sense a mesh object although it is not in any standard .MESH file form.
My question is how best would I go about writing a code to convert this into the correct structure to be saved as an .fbx file.
Additionally I have found online much information regarding:
fbx parsers
fbx writers
fbx sdk
However, I do not believe these are exactly what I am looking for (please correct me if I am wrong). In my case, I would like, in a sense, to generate an .fbx file from scratch, with no prior file type to begin with or convert from.
Any information on this topic such as a direct solution or even just the correct terminology that I can then use to direct my own more specific research, would be much appreciated.
Kind Regards,
Ichi.
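Whatever writer is used, it may help to pin down the in-memory structure first: a wire-frame cube is 8 vertices plus 12 edges indexing into them. A minimal sketch (the type and function names are invented for illustration; they are not from the FBX SDK):

```cpp
#include <array>
#include <vector>

struct Vec3 { double x, y, z; };

struct WireMesh {
    std::vector<Vec3> vertices;             // the 8 corner positions
    std::vector<std::array<int, 2>> edges;  // 12 index pairs into vertices
};

// Build a unit wire-frame cube: 8 vertices, 12 edges.
WireMesh make_cube() {
    WireMesh m;
    m.vertices = { {0,0,0}, {1,0,0}, {1,1,0}, {0,1,0},
                   {0,0,1}, {1,0,1}, {1,1,1}, {0,1,1} };
    m.edges = { {0,1}, {1,2}, {2,3}, {3,0},   // bottom face
                {4,5}, {5,6}, {6,7}, {7,4},   // top face
                {0,4}, {1,5}, {2,6}, {3,7} }; // vertical edges
    return m;
}
```

With the data in this shape, the FBX SDK's scene and mesh classes (or a hand-rolled FBX writer) can walk the two arrays; the SDK route is the usual one, since the FBX format itself is proprietary and only unofficially documented.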

Armadillo porting imagesc to save image bitmap from matrix

I have this Matlab code to display an image object after computing a spectrogram (STFT, coupled PLCA...):
t = z2 *stft_options.hop/stft_options.sr;
f = stft_options.sr*[0:size(spec_t,1)-1]/stft_options.N/1000;
max_val = max(max(db(abs(spec_t))));
imagesc(t, f, db(abs(spec_t)),[max_val-60 max_val]);
And get this result:
I ported it to C++ successfully using the Armadillo library and got the mat results:
mat f,t,spec_t;
The problem is that I have no idea how to convert this to a bitmap the way imagesc does in Matlab.
I searched and found this answer, but it seems it doesn't work in my case because:
I use a double matrix instead of an integer matrix, which can't be mapped to bitmap colours directly
The imagesc method takes 4 parameters, which include the bounds given by the vectors x and y
The imagesc method also supports scaling (I actually don't know how it works)
Does anyone have any suggestion?
Update: here is the result of Armadillo's save method. It doesn't look like the spectrogram image above. Am I missing something?
spec_t.save("spec_t.png", pgm_binary);
Update 2: save spectrogram with db and abs
mat mag_spec_t = db(abs(spec_t)); // where the db method is: m = 10 * log10(m);
mag_spec_t.save("mag_spec_t.png", pgm_binary);
And the result:
Armadillo is a linear algebra package; AFAIK it does not provide graphics routines. If you use something like OpenCV for those, then it is really simple.
See this link about OpenCV's imshow(), and this link on how to use it in a program.
Note that OpenCV (like most other libraries) uses row-major indexing (x,y) while Armadillo uses column-major (row,column) indexing, as explained here.
For scaling, it's safest to convert to unsigned char yourself. In Armadillo that would be something like:
arma::Mat<unsigned char> mat2 = arma::conv_to<arma::Mat<unsigned char>>::from(255*(mat-mat.min())/(mat.max()-mat.min()));
The t and f variables are for setting the axes, they are not part of the bitmap.
For just writing an image you can use Armadillo. Here is a description on how to write portable grey map (PGM) and portable pixel map (PPM) images. PGM export is only possible for 2D matrices, PPM export only for 3D matrices, where the 3rd dimension (size 3) are the channels for red, green and blue.
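The scaling and grey-map export above can also be sketched without Armadillo at all, in plain C++ (the function names here are made up for illustration; a binary PGM is just a tiny text header followed by raw bytes):

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <vector>

// Rescale doubles to the full 0..255 grey range, same idea as the
// Armadillo expression above: 255*(v - min)/(max - min).
std::vector<unsigned char> to_grey(const std::vector<double>& v) {
    auto [mn, mx] = std::minmax_element(v.begin(), v.end());
    double range = (*mx > *mn) ? (*mx - *mn) : 1.0;
    std::vector<unsigned char> out(v.size());
    for (std::size_t i = 0; i < v.size(); ++i)
        out[i] = static_cast<unsigned char>(255.0 * (v[i] - *mn) / range);
    return out;
}

// Write the bytes as a binary PGM (P5) image of size w x h.
bool write_pgm(const char* path, const std::vector<unsigned char>& px,
               int w, int h) {
    FILE* f = std::fopen(path, "wb");
    if (!f) return false;
    std::fprintf(f, "P5\n%d %d\n255\n", w, h);  // PGM header
    std::fwrite(px.data(), 1, px.size(), f);
    return std::fclose(f) == 0;
}
```

The minimum of the data maps to 0 and the maximum to 255, which is exactly the auto-scaling behaviour imagesc gives for a grey colour map.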
The reason your matlab figure looks prettier is because it has a colour map: a mapping of every value 0..255 to a vector [R, G, B] specifying the relative intensity of red, green and blue. A photo has an RGB value at every point:
colormap(gray);
x=imread('onion.png');
imagesc(x);
size(x)
That's the 3rd dimension of the image.
Your matrix is a 2d image, so the most natural way to show it is as grey levels (as happened for your spectrum).
x=mean(x,3);
imagesc(x);
This means that the R, G and B intensities jointly increase with the values in mat. You can put a colour map of different R,G,B combinations in a variable and use that instead, i.e. y=colormap('hot');colormap(y);. The variable y shows the R,G,B combinations for the (rescaled) image values.
It's also possible to make your own colour map (in matlab you can specify 64 R, G, and B combinations with values between 0 and 1):
z = [63:-1:0; 1:2:63 63:-2:0; 0:63]'/63;
colormap(z);
Now for increasing image values, red intensities decrease (starting from the maximum level), green intensities quickly increase then decrease, and blue values increase from minimum to maximum.
Because PPM appears not to support colour maps (I don't know the format well), you need to specify the R,G,B values in a 3D array. For a colour order similar to z you would need to make a Cube<unsigned char> c(ysize, xsize, 3) and then, for every pixel y, x in mat2, do:
c(y,x,0) = 255 - mat2(y,x);
c(y,x,1) = 255 - abs(255 - 2*mat2(y,x));
c(y,x,2) = mat2(y,x);
or something very similar.
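That per-pixel mapping can be sketched as a small helper in plain C++ (the struct and function names are invented for illustration); looping it over mat2 fills the three slices of the cube:

```cpp
#include <cstdlib>

struct RGB { unsigned char r, g, b; };

// Map one 0..255 grey value to R,G,B following the colour map above:
// red falls from max, green peaks mid-range, blue rises from min.
RGB grey_to_rgb(unsigned char v) {
    RGB c;
    c.r = static_cast<unsigned char>(255 - v);
    c.g = static_cast<unsigned char>(255 - std::abs(255 - 2 * int(v)));
    c.b = v;
    return c;
}
```

So a value of 0 comes out pure red, mid-range values mostly green, and 255 pure blue, matching the blue-to-red order of the hand-made z map.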
You may use SigPack, a signal processing library on top of Armadillo. It has spectrogram support and you may save the plot to a lot of different formats (png, ps, eps, tex, pdf, svg, emf, gif). SigPack uses Gnuplot for the plotting.

Matlab griddata equivalent in C++

I am looking for a C++ equivalent to Matlab's griddata function, or any 2D global interpolation method.
I have a C++ code that uses Eigen 3. I will have an Eigen vector containing x, y, and z values, and two Eigen matrices equivalent to those produced by meshgrid in Matlab. I would like to interpolate the z values from the vectors onto the grid points defined by the meshgrid equivalents (which will extend a bit past the outside of the original points, so minor extrapolation is required).
I'm not too bothered by accuracy--it doesn't need to be perfect. However, I cannot accept NaN as a solution--the interpolation must be computed everywhere on the mesh regardless of data gaps. In other words, staying inside the convex hull is not an option.
I would prefer not to write an interpolation from scratch, but if someone wants to point me to a pretty good (and explicit) recipe I'll give it a shot. It's not the most hateful thing to write (at least in an algorithmic sense), but I don't want to reinvent the wheel.
Effectively what I have is scattered terrain locations, and I wish to define a rectilinear mesh that nominally follows some distance beneath the topography for use later. Once I have the node points, I will be good.
My research so far:
The question asked here: MATLAB functions in C++ produced a close answer, but unfortunately the suggestion was not free (SciMath).
I have tried understanding the interpolation function used in Generic Mapping Tools, and was rewarded with a headache.
I briefly looked into the Grid Algorithms library (GrAL). If anyone has commentary I would appreciate it.
Eigen has an unsupported interpolation package, but it seems to just be for curves (not surfaces).
Edit: VTK has matplotlib-like functionality. Presumably there must be an interpolation used somewhere in there for display purposes. Does anyone know if that's accessible and usable?
Thank you.
This is probably a little late, but hopefully it helps someone.
Method 1.) Octave: If you're coming from Matlab, one way is to embed the GNU Matlab clone Octave directly into the C++ program. I don't have much experience with it, but you can call the Octave library functions directly from a cpp file.
See here, for instance: http://www.gnu.org/software/octave/doc/interpreter/Standalone-Programs.html#Standalone-Programs
griddata is included in Octave's geometry package.
Method 2.) PCL: The way I do it is to use the Point Cloud Library (http://www.pointclouds.org) and VoxelGrid. You can set the x and y bin sizes as you please, then set a really large z bin size, which gets you one z value for each x,y bin. The catch is that the x, y, and z values are the centroid of the points averaged into the bin, not the bin centers (which is also why it works for this), so you need to massage the x,y values when you're done:
Ex:
// read in a list of comma separated values (x,y,z)
FILE * fp = fopen("points.xyz", "r");

// store them in PCL's point cloud format
pcl::PointCloud<pcl::PointXYZ>::Ptr basic_cloud_ptr (new pcl::PointCloud<pcl::PointXYZ>);
double x, y, z;
while (fscanf(fp, "%lg, %lg, %lg", &x, &y, &z) != EOF)
{
    pcl::PointXYZ basic_point;
    basic_point.x = x; basic_point.y = y; basic_point.z = z;
    basic_cloud_ptr->points.push_back(basic_point);
}
fclose(fp);
basic_cloud_ptr->width = (int) basic_cloud_ptr->points.size();
basic_cloud_ptr->height = 1;

// create object for the result
pcl::PointCloud<pcl::PointXYZ>::Ptr cloud_filtered (new pcl::PointCloud<pcl::PointXYZ>());

// create the filtering object and process
pcl::VoxelGrid<pcl::PointXYZ> sor;
sor.setInputCloud(basic_cloud_ptr);
// set the bin sizes here (dx, dy, dz); for 2d results, make one of the bins
// larger than the data set span in that axis
sor.setLeafSize(0.1, 0.1, 1000);
sor.filter(*cloud_filtered);
cloud_filtered is now a point cloud that contains one point for each bin. Then I just make a 2-d matrix and go through the point cloud, assigning points to their x,y bins if I want an image, etc., as would be produced by griddata. It works pretty well, and it's much faster than Matlab's griddata for large datasets.
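A third option, given the stated tolerance for imperfect accuracy, is to skip libraries entirely: inverse distance weighting is a few lines, is defined everywhere (no NaN, no convex-hull restriction), and extrapolates gracefully just outside the data. A self-contained sketch (names are illustrative):

```cpp
#include <cmath>
#include <vector>

struct Pt { double x, y, z; };

// Inverse-distance-weighted interpolation: a crude but global griddata
// substitute. Every query point gets a finite value, even outside the
// convex hull of the scattered data.
double idw(const std::vector<Pt>& pts, double qx, double qy,
           double power = 2.0) {
    double wsum = 0.0, zsum = 0.0;
    for (const Pt& p : pts) {
        double d2 = (p.x - qx) * (p.x - qx) + (p.y - qy) * (p.y - qy);
        if (d2 == 0.0) return p.z;  // exact hit on a data point
        double w = 1.0 / std::pow(d2, power / 2.0);
        wsum += w;
        zsum += w * p.z;
    }
    return zsum / wsum;
}
```

Evaluating idw() at every node of the meshgrid-equivalent matrices gives the interpolated surface; far from the data it flattens toward the mean of all z values, which may or may not be acceptable for the terrain-following mesh described above.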

transformation of sensor data

I want to transform this data (I was told to do it from the object perspective). A list of the data is:
[0, -20.790001, -4.49] make up the acceleration xyz coordinates - accel(x,y,z).
[-0.762739, -3.364226, -8.962189] make up angle xyz coordinates - angle(x,y,z).
Should I use Rodrigues' rotation formula or a linear transformation (rotation) matrix for this? Is it any different with sensor data?
I am able to read the data from the .csv, but am unsure how to implement the transformation in C++ and how to create a matrix in C++.
As long as you have a formula for transforming the data, you just need to apply it. As for creating a matrix, there are multiple ways, either by using a double array:
float matrix[][] ( or matrix** if you want to use pointers )
or using a class (or struct, up to you) which contains the rows and columns
class Matrix
float rows[]
float columns[]
Good luck!
Note: just pseudo code definitely won't work out of the box, obviously
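Since the question mentions Rodrigues' formula specifically, here is a minimal self-contained sketch of it in C++ (no libraries; how the angle(x,y,z) sensor readings map to an axis and angle is device-specific and left as an assumption):

```cpp
#include <cmath>

struct V3 { double x, y, z; };

V3 cross(const V3& a, const V3& b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

double dot(const V3& a, const V3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Rodrigues' rotation formula: rotate v by angle theta (radians) about
// the unit axis k:  v' = v cos(t) + (k x v) sin(t) + k (k . v)(1 - cos(t))
V3 rodrigues(const V3& v, const V3& k, double theta) {
    double c = std::cos(theta), s = std::sin(theta);
    V3 kxv = cross(k, v);
    double kv = dot(k, v);
    return { v.x * c + kxv.x * s + k.x * kv * (1 - c),
             v.y * c + kxv.y * s + k.y * kv * (1 - c),
             v.z * c + kxv.z * s + k.z * kv * (1 - c) };
}
```

Applying this to the acceleration vector with the appropriate axis and angle rotates it into the object frame; a 3x3 rotation matrix built from the same axis and angle would give the identical result.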