Replacing a single row in an R raster

I am looking for the fastest way to assign new values to a whole row of a large raster.
I have a large raster called ras
> ras
class : RasterLayer
dimensions : 71476, 49933, 3569011108 (nrow, ncol, ncell)
resolution : 30, 30 (x, y)
extent : 593235, 2091225, -3314375, -1170095 (xmin, xmax, ymin, ymax)
coord. ref. : +proj=aea +lat_1=-18 +lat_2=-36 +lat_0=0 +lon_0=132 +x_0=0 +y_0=0 +ellps=GRS80 +units=m +no_defs
data source : C:/Users/smithj/AppData/Local/Temp/RtmpiynZ5N/raster/r_tmp_2019-05-04_232648_206436_44436.grd
names : layer
values : 0, 255 (min, max)
And I have a vector called newvals of length n = ncol(ras). I generate newvals with a function that is not amenable to use with the calc function, but in the example below it is just a randomly generated vector for the purposes of this question.
#create example values
newvals <- sample(0:100, 49933, replace = TRUE)
My question is: if I wanted to replace the 7023rd row of ras with newvals, is there a faster method than the one below?
#insert newvals into row 7023 of ras
ras[7023,]<-newvals
I have also looked at setValues, but it seems to only set values for a whole raster, not part of one. Any help would be appreciated.

You do not provide much context. The problem is that your large dataset is file-based, so any change will create a new file (unless you use raster::update).
If you need to do this many times, for example row by row, you can open a new file for writing and write it row by row (see writeStart). If only one row needs to change, perhaps try your luck with raster::update.
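A minimal sketch of that row-by-row pattern, assuming ras and newvals from the question and a placeholder output filename:
library(raster)

# Sketch: copy 'ras' row by row into a new file, substituting row 7023.
# "ras_edited.grd" is a placeholder filename.
out <- raster(ras)                     # empty raster with ras's geometry
out <- writeStart(out, filename = "ras_edited.grd", overwrite = TRUE)
for (r in 1:nrow(ras)) {
  v <- getValues(ras, row = r)         # read one row from the source file
  if (r == 7023) v <- newvals          # swap in the replacement values
  out <- writeValues(out, v, r)        # write the row to the new file
}
out <- writeStop(out)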

C++ vtkNetCDFCFReader: issues reading variables with different dimensions

I was trying to read my nc file. There are 3 variables in it, they are:
zonalWind (height, lon, lat)
meridionalWind (height, lon, lat)
verticalVelocity (height_2, lon, lat)
Below is my code reading the arrays:
vtkNetCDFCFReader *reader = vtkNetCDFCFReader::New();
reader->SetFileName(fileName);
reader->SetOutputTypeToStructured();
reader->UpdateMetaData();
reader->Update();
reader->Print(std::cout);
reader->SetVariableArrayStatus("verticalVelocity", 1);
reader->SetVariableArrayStatus("zonalWind", 1);
reader->SetVariableArrayStatus("meridionalWind", 1);
But then I got the following error in the terminal, skipping the verticalVelocity array because of the dimension problem:
vtkNetCDFCFReader (0x7fb1f1517350): Variable verticalVelocity dimensions (height_2 lat lon) are different than the other variable dimensions (height lat lon). Skipping
Is there any method to read in all 3 variables instead of "skipping" one, and do some processing afterwards?
TIA
No. You should create 2 vtkNetCDFCFReader instances and read variables with the same dimensions for each.
If you want to extract just a portion of the larger grid and use those values on the smaller grid, then attach a vtkExtractGrid filter to one or both of the reader outputs to obtain datasets of the same size. Finally, run a vtkMergeArrays filter on the results to generate a single dataset with all the array values.
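A rough sketch of that pipeline (reusing fileName from the question; the VOI extents are placeholders you would adapt to your grids):
#include <vtkNetCDFCFReader.h>
#include <vtkExtractGrid.h>
#include <vtkMergeArrays.h>

// Reader 1: the two variables that share the (height, lat, lon) dimensions.
vtkNetCDFCFReader *windReader = vtkNetCDFCFReader::New();
windReader->SetFileName(fileName);
windReader->SetOutputTypeToStructured();
windReader->UpdateMetaData();
windReader->SetVariableArrayStatus("zonalWind", 1);
windReader->SetVariableArrayStatus("meridionalWind", 1);

// Reader 2: the variable on the (height_2, lat, lon) grid.
vtkNetCDFCFReader *vvReader = vtkNetCDFCFReader::New();
vvReader->SetFileName(fileName);
vvReader->SetOutputTypeToStructured();
vvReader->UpdateMetaData();
vvReader->SetVariableArrayStatus("verticalVelocity", 1);

// Crop the larger grid down to the extents of the smaller one;
// the VOI values here are placeholders for your actual extents.
vtkExtractGrid *extract = vtkExtractGrid::New();
extract->SetInputConnection(windReader->GetOutputPort());
extract->SetVOI(0, 99, 0, 99, 0, 49);

// Merge the now equally sized outputs into one dataset with all arrays.
vtkMergeArrays *merge = vtkMergeArrays::New();
merge->AddInputConnection(extract->GetOutputPort());
merge->AddInputConnection(vvReader->GetOutputPort());
merge->Update();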

How to use var/variance function in armadillo

How should I be using the var() function in Armadillo?
I have a matrix in which rows are variables/features and columns observations/instances.
I wish to get the variance of each row so I can determine the variables/features with the greatest variance.
Currently I am calling:
auto variances = arma::var(data, 0, 1);
Where data is my matrix.
As far as I can tell, at the moment I am getting a matrix, and the documentation suggests this is correct. I was expecting to get back a single vector with a variance score for each of my matrix rows.
I can loop through my rows and get the variance for each row individually like so:
for (arma::uword i = 0; i < data.n_rows; ++i) {
    auto rowVariance = arma::var(data.row(i));
}
But I would prefer not to do this.
I would like to get back a single vector containing variance values for each row in my matrix and then use arma::sort_index() on this vector to get a sorted set of indices corresponding to the sorted variances.
Thanks in advance.
Turns out the error was because I was using arma::var variances = arma::var(data, 0, 1) when I should have been using arma::Col<T> variances = arma::var(data, 0, 1), due to my data matrix being of type arma::Mat<T>, as I allow both single and double floating-point precision.
The comment above from vagoberto set me on the right track.
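For reference, a minimal self-contained sketch of the working approach (double precision assumed; substitute arma::Mat<T> / arma::Col<T> in templated code):
#include <armadillo>

int main() {
    // 5 features (rows) x 100 observations (columns) of random data.
    arma::mat data(5, 100, arma::fill::randu);

    // norm_type = 0 -> normalise by N - 1; dim = 1 -> variance of each row.
    arma::vec variances = arma::var(data, 0, 1);

    // Row indices ordered from highest to lowest variance.
    arma::uvec order = arma::sort_index(variances, "descend");
    order.print("rows by variance:");
    return 0;
}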

Coordinates of a pixel between two images

I am looking for a solution to easily compute the pixel coordinate from two images.
Question: if you take the following code, how could I compute the coordinates of the pixels that changed, from the QVector difference? Is it possible to get an (x, y) coordinate and find which pixel it represents in currentImage?
char *previousImage;
char *currentImage;
QVector<long> difference;
for (int i = 0; i < CurrentImageSize; i++)
{
    // Check if the pixels are the same (we could also compare RGB values,
    // this is just for the example)
    if (*previousImage != *currentImage)
    {
        difference.push_back(i); // remember which offset changed
    }
    previousImage++;
    currentImage++;
}
EDIT:
More information about this topic:
The image is in RGB format
The width, the height and the bpp of both images are known
I have a pointer to the bytes representing the image
The main objective here is to know the new value of each pixel that changed between the two images and to know which pixel it is (its coordinates).
There is not enough information to answer, but I will try to give you some idea.
You have declared char *previousImage;, which implies to me that you have a pointer to the bytes representing an image. You need more than that to interpret the image.
You need to know the pixel format. You mention RGB, so for the time being let's assume that the image uses 3 bytes per pixel, in RGB order.
You need to know the width of the image.
Given the above 2, you can calculate the "row stride", which is the number of bytes that a row takes up. This is usually "bytes per pixel" * "image width", but it is typically padded out to be divisible by 4. So 3 bpp and a width of 15 would be 45 bytes + 3 bytes of padding, making the row stride 48.
Given that, if you have an index into the image data, you first integer-divide it against the row stride to get the row (Y coordinate).
The X coordinate is the (index mod the row stride) integer-divided by the bytes per pixel.
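Putting those steps together, a small sketch of the byte-index to (x, y) conversion (the 4-byte row alignment is the assumption described above):
#include <cstddef>

struct Pixel { int x; int y; };

// Map a byte offset into the image buffer to pixel coordinates,
// assuming rows are padded to a multiple of 4 bytes.
Pixel indexToPixel(std::size_t byteIndex, int width, int bytesPerPixel)
{
    std::size_t stride = ((width * bytesPerPixel + 3) / 4) * 4;
    Pixel p;
    p.y = static_cast<int>(byteIndex / stride);                   // row
    p.x = static_cast<int>((byteIndex % stride) / bytesPerPixel); // column
    return p;
}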
From what I understand, you want to compute the displacement, or motion, that occurred between the two images, e.g. for each pixel I(x, y, t=previous) in previousImage, you want to know where it went in currentImage and what its new coordinate I(x, y, t=current) is.
If that is the case, it is called motion estimation, or measuring the optical flow. There are many algorithms for this, relying on more or less complex hypotheses depending on the objects you observe in the image sequence.
The simplest hypothesis is that if you follow a moving pixel I(x, y, t) in the scene you observe, its luminance remains constant over time. In other words, dI(x, y, t)/dt = 0.
Since I(x, y, t) is a function of three parameters (space and time) with two unknowns and only one equation, this is an ill-posed problem with no easy solution. Many of the algorithms therefore add a further hypothesis so that the problem has a unique solution.
You can use existing libraries which will do this for you; a popular one is OpenCV.
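For example, a minimal sketch using OpenCV's dense Farneback optical flow (the file names are placeholders):
#include <opencv2/opencv.hpp>
#include <cstdio>

int main()
{
    cv::Mat prev = cv::imread("prev.png", cv::IMREAD_GRAYSCALE);
    cv::Mat curr = cv::imread("curr.png", cv::IMREAD_GRAYSCALE);

    cv::Mat flow; // 2-channel float image: per-pixel displacement (dx, dy)
    cv::calcOpticalFlowFarneback(prev, curr, flow,
                                 0.5, 3, 15, 3, 5, 1.2, 0);

    // Displacement of the pixel at (x, y) = (10, 20); note at(row, col).
    cv::Point2f d = flow.at<cv::Point2f>(20, 10);
    std::printf("pixel (10, 20) moved by (%f, %f)\n", d.x, d.y);
    return 0;
}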

Curvature Scale Space corner detection algorithm. Arc Length Parameter?

I'm studying the CSS algorithm and I can't get the hang of the concept of the 'arc length parameter'.
According to the literature, the planar curve is Gamma(u) = (x(u), y(u)), and they say this u is the arc length parameter; apparently, the Gaussian kernel g is also parameterized by this u.
Stop me if I got something wrong, but aren't x and y the location of the pixel? How can they be represented by another parameter?
I had no idea when I first saw it in the literature, so I looked up the code, and apparently I got even more puzzled.
Here is the relevant portion of the code:
#include <opencv2/opencv.hpp>
#include <vector>
using namespace cv;
using namespace std;

// Compute a 1D Gaussian kernel of aperture M and its 1st and 2nd derivatives.
void getGaussianDerivs(double sigma, int M, vector<double>& gaussian,
                       vector<double>& dg, vector<double>& d2g) {
    int L = (M - 1) / 2;
    double sigma_sq = sigma * sigma;
    double sigma_quad = sigma_sq * sigma_sq;
    dg.resize(M); d2g.resize(M); gaussian.resize(M);
    Mat_<double> g = getGaussianKernel(M, sigma, CV_64F);
    for (double i = -L; i < L + 1.0; i += 1.0) {
        int idx = (int)(i + L);
        gaussian[idx] = g(idx);
        // Derivative formulas from http://www.cedar.buffalo.edu/~srihari/CSE555/Normal2.pdf
        dg[idx] = (-i / sigma_sq) * g(idx);
        d2g[idx] = (-sigma_sq + i * i) / sigma_quad * g(idx);
    }
}
So, it seems the code uses a simple 1D Gaussian kernel with aperture size M and computes its 1st and 2nd derivatives. As far as I know, a 1D Gaussian kernel has a parameter x, which is a horizontal coordinate, and sigma, which is the scale. It seems like the 'arc length parameter u' is equivalent to the variable x. That doesn't make any sense, because later in the code it directly convolves the sets of x and y values on the contour.
what is this u?
PS. Since I replied to the fellow who tried to answer my question, I think I should modify my question, so here we go.
What I'm confused about is how this parameter u is implemented in code. I think I understood the full code above (of course, I inserted only a portion of it), but the problem is that I have no idea what it would look like for the 'improved' version of the algorithm. That version uses an 'affine length parameter' instead of this 'arc length parameter', and I'm not sure how to translate the concept into code.
According to the literature, the main difference between the arc length parameter and the affine length parameter is the sampling interval: the arc length parameter uses 1 for the vertical and horizontal directions and sqrt(2) for the diagonal direction (see the sketch below). That makes sense, since the portion of code above uses a for loop with a fixed interval of 1 to compute the 1st and 2nd derivatives of the 1D Gaussian, but how would it work with a different interval for a different variable? Is it possible that I cannot use a for loop for it?
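For concreteness, here is one way such an arc length parameter u is typically accumulated along an 8-connected contour (an illustration, not code from the paper): each step between neighbouring contour points adds 1 horizontally or vertically and sqrt(2) diagonally.
#include <opencv2/opencv.hpp>
#include <cmath>
#include <cstddef>
#include <vector>

// Cumulative arc length u along an 8-connected contour: each step
// contributes 1 (horizontal/vertical move) or sqrt(2) (diagonal move).
std::vector<double> arcLengthParam(const std::vector<cv::Point>& contour)
{
    std::vector<double> u(contour.size(), 0.0);
    for (std::size_t i = 1; i < contour.size(); ++i) {
        int dx = contour[i].x - contour[i - 1].x;
        int dy = contour[i].y - contour[i - 1].y;
        u[i] = u[i - 1] + std::sqrt(double(dx * dx + dy * dy));
    }
    return u;
}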

Why should specific pixel processing be (h, w), not (w, h) or (x, y)?

Some sample code for image processing using OpenCV gives something like this:
for (i = 0; i < height; i++)
{
    for (j = 0; j < width; j++)
    {
        if (pointPolygonTest(myPolygon, Point2f(i, j), false) >= 0)
        {
            // do some processing
        }
    }
}
In the iteration, why do we start from height and width? And why does the Point store (height, width), i.e. (y, x)?
The ranges [0..height] and [0..width] are the maximum boundaries of your working area.
This code tests which pixels of the whole image are inside the polygon myPolygon.
The word "whole" means you should check all pixels of your image, so you iterate from 0 to height for Y, and from 0 to width for X.
Actually here, the row/column convention is used to iterate over the whole image.
height = Number of Rows
width = Number of Columns
The image is accessed row-wise: the outer loop iterates over the rows of the image and the inner loop over the columns, so basically i is the current row and j is the current column of the image.
The inner loop processes a complete row of the image.
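To make the two conventions concrete, a small sketch (with made-up image sizes): Mat::at() takes (row, col), while Point is (x, y), i.e. (col, row).
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    Mat img = Mat::zeros(480, 640, CV_8UC1); // 480 rows (height), 640 cols (width)

    for (int i = 0; i < img.rows; i++) {      // i = row = y
        for (int j = 0; j < img.cols; j++) {  // j = column = x
            img.at<uchar>(i, j) = 255;        // at() takes (row, col)
        }
    }

    // The same pixel addressed via a Point uses (x, y) order:
    img.at<uchar>(Point(10, 20)) = 0;         // column 10, row 20
    return 0;
}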