I am working with a JSON file that consists of approximately 17,000 3x1 arrays denoting coordinates.
Currently, I have a 1024x1024 image (which I have flattened), and I am using np.hstack to append each 3x1 array to that flattened image; this gives me a 1D array of dimension 1048579x1.
My objective is to create a final array of dimension 1048579x17,000.
Unfortunately, list.append and np.append are not working in this case because they consume too much memory. I tried running this on Colab Pro, but the memory consumption is too high and the session crashes.
My current code is as follows:
import cv2
import json
import numpy as np

image = cv2.imread('image_name.jpg', 0)   # read as grayscale
shape = image.shape                        # (1024, 1024)
flat_img = image.ravel()                   # 1D array of length 1,048,576
print(flat_img.shape)

# Here data consists of 17,000 entries, each of which is a 3x1 list
with open('data.json') as f:
    json1_str = f.read()
json1_data = json.loads(json1_str)

local_coordinates = []
for i in range(len(json1_data)):
    local_coord = json1_data[i]['local_coordinate']
    local_coord = np.array(local_coord)
    new_arr = np.hstack((flat_img, local_coord))   # length 1,048,579
    new_arr = new_arr.tolist()
    local_coordinates.append(new_arr)
Is there a memory-efficient way to stack all 17,000 of these 1,048,579-element 1D arrays into the final matrix that can be used for training purposes?
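One memory-saving sketch (an illustration, not a tested solution), assuming the file names from the code above: preallocate the full matrix once as a disk-backed memmap instead of growing a Python list, since 17,000 rows of 1,048,579 float32 values is roughly 70 GB and will not fit in Colab RAM in any case:

import cv2
import json
import numpy as np

image = cv2.imread('image_name.jpg', 0)
flat_img = image.ravel().astype(np.float32)        # 1,048,576 pixels

with open('data.json') as f:
    json1_data = json.load(f)

n_points = len(json1_data)                         # ~17,000
row_len = flat_img.size + 3                        # 1,048,579

# Disk-backed array: rows are filled in place, so RAM usage stays small.
out = np.memmap('stacked.dat', dtype=np.float32, mode='w+',
                shape=(n_points, row_len))
out[:, :flat_img.size] = flat_img                  # the image part is identical in every row
for i, entry in enumerate(json1_data):
    out[i, flat_img.size:] = np.ravel(entry['local_coordinate'])
out.flush()

Since the image portion is identical in every row, an even lighter option is to store only the 17,000 x 3 coordinate block and concatenate the image on the fly during training.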
Related
I have a Linear() layer in Pytorch after a few Conv() layers. All the images in my dataset are black and white. However most of the images in my test set are of a different dimension than the images in my training set. Apart from resizing the images themselves, is there any way to define the Linear() layer in such a way that it takes a variable input dimension? For example something similar to view(-1)
Well, it doesn't make sense to have a Linear() layer with a variable input size, because it is in fact a learnable matrix of shape [n_in, n_out], and matrix multiplication is not defined for inputs whose feature dimension != n_in.
What you can do is apply pooling from the functional API. You'll need to specify kernel_size and stride such that the resulting output has a feature dimension of size n_in.
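A minimal sketch of that idea (layer and class sizes invented for illustration), using adaptive pooling from the functional API, which picks the kernel size and stride for you, so the Linear() layer always receives the same number of features regardless of the input image size:

import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvNet(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # black-and-white -> 1 input channel
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # The Linear layer always sees 32 * 4 * 4 features, whatever the input size.
        self.fc = nn.Linear(32 * 4 * 4, n_classes)

    def forward(self, x):
        x = self.conv(x)
        x = F.adaptive_avg_pool2d(x, (4, 4))  # pool to a fixed 4x4 spatial map
        return self.fc(x.flatten(1))

net = ConvNet()
print(net(torch.rand(2, 1, 64, 64)).shape)   # torch.Size([2, 10])
print(net(torch.rand(2, 1, 100, 80)).shape)  # torch.Size([2, 10])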
I am trying to understand unpooling in Pytorch because I want to build a convolutional auto-encoder.
I have the following code
import torch
import torch.nn as nn
from torch.autograd import Variable

data = Variable(torch.rand(1, 73, 480))
pool_t = nn.MaxPool2d(2, 2, return_indices=True)
unpool_t = nn.MaxUnpool2d(2, 2)
out, indices1 = pool_t(data)
out = unpool_t(out, indices1)
But I am constantly getting this error on the last line (unpooling).
IndexError: tuple index out of range
Although the data is simulated in this example, the input has to be of that shape because of the preprocessing that has to be done.
I am fairly new to convolutional networks, but I have even tried using a ReLU and a 2D convolutional layer before the pooling; however, the indices always seem to be incorrect when unpooling for this shape.
Your data is 1D and you are using 2D pooling and unpooling operations.
PyTorch interprets the first two dimensions of a tensor as the "batch" dimension and the "channel"/"feature space" dimension. The remaining dimensions are treated as spatial dimensions.
So, in your example, data is a 3D tensor of size (1, 73, 480) and is interpreted by PyTorch as a single batch ("batch dimension" = 1) with 73 channels per sample and one spatial dimension of 480 samples.
For some reason MaxPool2d works for you and treats the channel dimension as a spatial dimension, subsampling it as well; I'm not sure whether this is a bug or a feature.
If you do want to pool along the second dimension, you can add an additional dimension, making data a 4D tensor:
out, indices1 = pool_t(data[None, ...])
out = unpool_t(out, indices1, data[None, ...].size())
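Putting the two lines together, a minimal runnable version (plain tensors, since Variable is no longer needed in recent PyTorch):

import torch
import torch.nn as nn

data = torch.rand(1, 73, 480)

pool_t = nn.MaxPool2d(2, 2, return_indices=True)
unpool_t = nn.MaxUnpool2d(2, 2)

x = data[None, ...]                      # add a batch dim -> shape (1, 1, 73, 480)
out, indices1 = pool_t(x)                # out: (1, 1, 36, 240)
# Passing the original size lets MaxUnpool2d restore the odd 73 dimension exactly.
out = unpool_t(out, indices1, x.size())  # out: (1, 1, 73, 480)
print(out.shape)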
I am doing a batched inversion of a large number of 3x3 matrices with CUDA.
The goal is to get a big matrix of 3x3 matrices (so I use a 4D array).
I have previously done the same operation with the numpy.linalg.inv function. That way, I directly get an array of 3x3 matrices; I show below the code that performs this operation.
Now, with the CUDA version, I would like to reshape the big 1D array produced in a minimum of instructions: I have to build an (N,N,3,3) array from an (N*N*3*3) 1D array.
For the moment, I can do this reshape in 2 steps (see the code below).
The original version with classical numpy.linalg.inv is carried out by:
for r_p in range(N):
    for s_p in range(N):
        # original version (without GPU)
        invCrossMatrix[:,:,r_p,s_p] = np.linalg.inv(arrayFullCross_vec[:,:,r_p,s_p])
invCrossMatrix represents a (3,3,N,N) array and I get it directly from the (3,3,N,N) arrayFullCross array (dimBlocks = 3)
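As an aside, np.linalg.inv also accepts a stack of matrices and inverts along the last two axes, so the double loop above can be written as a single call by moving the 3x3 axes to the end (a sketch with made-up, diagonally dominant data so the blocks are invertible):

import numpy as np

N = 4
# Stand-in for arrayFullCross_vec, shape (3, 3, N, N)
arrayFullCross_vec = np.random.rand(3, 3, N, N) + 3 * np.eye(3)[:, :, None, None]

# Move the 3x3 axes to the end, invert the whole (N, N, 3, 3) stack at once,
# then move them back to recover the (3, 3, N, N) layout.
invCrossMatrix = np.moveaxis(
    np.linalg.inv(np.moveaxis(arrayFullCross_vec, (0, 1), (2, 3))),
    (2, 3), (0, 1))
print(invCrossMatrix.shape)  # (3, 3, N, N)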
For the moment, when I use the GPU batch execution, I start from the 1D array:
# Declaration of inverse cross matrix
invCrossMatrix_temp = np.zeros((N**2,3,3))
# Create arrayFullCross_vec array
arrayFullCross_vec = np.zeros((3,3,N,N))
# Create invCrossMatrix_gpu array
invCrossMatrix_gpu = np.zeros((3*3*(N**2)))
# Build observables covariance matrix
arrayFullCross_vec = buildObsCovarianceMatrix3_vec(k_ref, mu_ref, ir)
## Performing batch inversion 3x3 :
invCrossMatrix_gpu = gpuinv3x3(arrayFullCross_vec.flatten('F'),N**2)
## First reshape
invCrossMatrix_temp = invCrossMatrix_gpu.reshape(N**2,3,3)
# Second reshape : don't forget ".T" transpose operator
invCrossMatrix = (invCrossMatrix_temp.reshape(N,N,3,3)).T
Question 1) Why is the 'F' option in flatten('F') necessary?
If I only do gpuinv3x3(arrayFullCross_vec.flatten(), N**2), the code doesn't work. Is Python maybe column-major like Fortran?
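For reference, NumPy's default is C (row-major) order, so flatten() and flatten('F') produce different element orders; a tiny sketch:

import numpy as np

a = np.arange(6).reshape(2, 3)   # [[0 1 2]
                                 #  [3 4 5]]
print(a.flatten())               # C / row-major (NumPy's default): [0 1 2 3 4 5]
print(a.flatten('F'))            # Fortran / column-major:          [0 3 1 4 2 5]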
Question 2) Now, I would like to convert the following block:
## First reshape
invCrossMatrix_temp = invCrossMatrix_gpu.reshape(N**2,3,3)
# Second reshape : don't forget ".T" transpose operator
invCrossMatrix = (invCrossMatrix_temp.reshape(N,N,3,3)).T
into a single reshape instruction. Is it possible?
The issue is how to convert the 1D array invCrossMatrix_gpu (of size N**2 * 3 * 3) directly into a (3,3,N,N) array.
I would like to reshape the original 1D array in a single step, since I call these routines many times.
Update
Is it right to say that the array invCrossMatrix defined by:
invCrossMatrix = (invCrossMatrix_temp.reshape(N,N,3,3)).T
has dimensions (3,3,N,N)?
@hpaulj: Is it equivalent to do this?
invCrossMatrix =(invCrossMatrix_temp.reshape(N,N,3,3)).transpose(2,3,0,1)
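A quick numerical check on a small random array (hypothetical values) shows that both expressions have the claimed (3,3,N,N) shape, but they are not, in general, element-wise equal, since .T is transpose(3,2,1,0):

import numpy as np

N = 4
invCrossMatrix_temp = np.random.rand(N * N, 3, 3)   # hypothetical GPU output, reshaped

a = invCrossMatrix_temp.reshape(N, N, 3, 3).T                      # reverses all four axes
b = invCrossMatrix_temp.reshape(N, N, 3, 3).transpose(2, 3, 0, 1)  # explicit axis order

print(a.shape, b.shape)       # (3, 3, 4, 4) (3, 3, 4, 4)
print(np.array_equal(a, b))   # False for generic data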
If I have a 4-D blob, say of size (40,1024,300,1), and I want to average pool across the second channel and generate an output of size (40,1,300,1), how would I do it? I think the reduction layer collapses the whole blob and generates a blob of size (40) by summing the elements along all the other axes (after axis 1) as well. Is there any workaround for this without re-implementing a new layer?
The only easy workaround I found is as follows. Permute your blob to a shape (40,300,1,1024). Use reduction layer to compute the mean with axis = -1 and operation = MEAN. I think the blob will be of shape (40,300,1). You may need to use reshape to append an extra dimension at the end (check if this is needed) and then permute back to shape (40,1,300,1).
You can find an implementation of a Permute layer here or here. I hope this helps.
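A NumPy sketch of the intended shape bookkeeping (not Caffe code, just the equivalent axis manipulations) may make the sequence clearer:

import numpy as np

blob = np.random.rand(40, 1024, 300, 1)

x = blob.transpose(0, 2, 3, 1)    # Permute                  -> (40, 300, 1, 1024)
x = x.mean(axis=-1)               # Reduction, MEAN, axis=-1 -> (40, 300, 1)
x = x.reshape(40, 300, 1, 1)      # Reshape: restore a trailing singleton axis
x = x.transpose(0, 2, 1, 3)       # Permute back             -> (40, 1, 300, 1)
print(x.shape)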
I am using, for the first time, the scikit-image package's MCP class to find the minimum cost path between two points over a cost raster from ArcGIS, converted to a NumPy array using the RasterToNumPyArray tool. However, the start and end indices are among the inputs the MCP class needs for this.
I do not know how to take a set of points (ArcGIS shapefile that has the lat,long coordinates) and convert that to the location index on a NumPy array generated from a raster with spatial data. I know I can assign the raster-cell OID to the point in ArcGIS, but I am not sure how that can transfer over to the index on the NumPy array. Does anyone know if this is possible?
Alright, here is the only solution I could sort out myself:
import arcpy
from arcpy.sa import *
import numpy as np

arcpy.CheckOutExtension("Spatial")
arcpy.env.extent = "MAXOF"  # ensures the extents of all my rasters will be the same, VERY IMPORTANT

# Creates a raster that only has two value cells (where my points are); the extent
# is that of my cost raster and the rest of the cells are NoData
extract = ExtractByMask("cost.img", "points.shp")
# Change the value of those two points to 500, a value much higher than the other cells;
# choose whatever value (positive or negative) that you know won't be in your cost raster
extract = Con(extract > 0, 500, 0)

cost = arcpy.RasterToNumPyArray("cost.img")
calc = arcpy.RasterToNumPyArray(extract)

# np.where gives the indices of all entries equal to 500, but not in (12, 8), (17, 4)
# format; it looks like (array([12, 17]), array([8, 4])), so build the point indices:
points = np.where(calc == 500)
start = points[0][0], points[1][0]
end = points[0][1], points[1][1]
Now I have the start and end indices for my mcp process. If the directionality of the start and end points is important in your analysis, then you will have to tweak this to be able to identify them in the array, but it wouldn't be too difficult.
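For completeness, a sketch of how those indices can feed into scikit-image's MCP class (assuming cost, start, and end from the code above; fully_connected=True allows diagonal moves):

from skimage.graph import MCP

mcp = MCP(cost, fully_connected=True)
cumulative_costs, traceback = mcp.find_costs(starts=[start], ends=[end])
path = mcp.traceback(end)               # list of (row, col) indices from start to end
print(cumulative_costs[end], len(path))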