What is the equivalent python code for this octave code - python-2.7

link to code and files
1. I = imread('one.jpg');
2. I = imresize(I, [20,20]);
3. I = im2double(I);
4. I = mean(I, 3);
# This next line
5. a = reshape(I, [], 400);
I read an image, resized it to 20x20, converted it to a double matrix, and then took the mean across channels to get grayscale. All of this I can do in Python too, but I can't reproduce the 5th line of code. If I try reshape(I, 1, 400), the image appears rotated. I don't know how to write the 5th line as above in Python.
The problem
In the link, along with the code, there is a displayData function. I saved the matrix I got using Python as a .mat file and loaded it into Octave; when I called displayData() on that matrix I got a rotated image (also included in the link). There is no such problem in Octave. Thank you for looking into this.

For reshaping an array you can use numpy and, following your code, its reshape function. In your case, you are changing the size of I from (20, 20) to (1, 400).
A complete example which saves the resulting reshaped array to a mat file, using OpenCV APIs for dealing with images, is:
import numpy as np
import cv2
import scipy.io
I = cv2.imread('one.jpg')                    # read the image
I = cv2.resize(I, (20, 20))                  # resize to 20x20
I = cv2.normalize(I.astype('float'), None, 0.0, 1.0, cv2.NORM_MINMAX)  # scale values to [0, 1]
I = np.mean(I, axis=2)                       # average the channels to get grayscale
a = np.reshape(I, (1, 400), order='F')       # column-major reshape, as Octave does
scipy.io.savemat('a.mat', mdict={'a': a})    # save the result to a .mat file
Note the second parameter of reshape, which is a tuple containing the new size of the array. Also notice the third parameter, order, which rearranges the elements in column-major (Fortran) style, the convention used by Octave (see reshape in Octave: http://www.gnu.org/software/octave/doc/v4.0.1/Rearranging-Matrices.html#XREFreshape). This produces a correct, non-rotated image, matching the one obtained from Octave.
Since you want to go from a 2D array to a 1D array, you can also use numpy's ravel, which returns a view of I when possible (so modifying a also changes I), or flatten, which always returns a copy of I (so modifying a does not change I). Note that both ravel and flatten return a 1D array with shape (400,), and the same order parameter should be used, as in the sketch below.
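A minimal sketch of the difference, using a made-up 20x20 array standing in for the grayscale image from above:
import numpy as np

I = np.arange(400, dtype=float).reshape(20, 20)   # stand-in for the 20x20 grayscale image

a_ravel = I.ravel(order='F')                # 1D, shape (400,); a view only when the memory layout allows it
a_flat = I.flatten(order='F')               # 1D, shape (400,); always a copy
a_row = np.reshape(I, (1, 400), order='F')  # 2D row vector, shape (1, 400)

print(a_ravel.shape, a_flat.shape, a_row.shape)   # (400,) (400,) (1, 400)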

Related

Optimal way to append to numpy array when dealing with large dimensions

I am working with a JSON file that consists of approximately 17,000 3x1 arrays denoting coordinates.
Currently, I have an image of dimensions 1024x1024 (which I have flattened), and I am using np.hstack to add each 3x1 array to that image; this gives me a 1D array of dimension 1048579x1.
My objective is to create a final array of dimension 1048579x17,000.
Unfortunately, list.append and np.append are not working in this case because they consume too much memory. I tried running this on Colab Pro, but the memory consumption is too high and causes the session to crash.
My current code is as follows
import json
import cv2
import numpy as np

image = cv2.imread('image_name.jpg', 0)
shape = image.shape
flat_img = image.ravel()
print(flat_img.shape)

# Here data consists of 17,000 entries, each of which is a 3x1 list
with open('data.json') as f:
    json1_str = f.read()
    json1_data = json.loads(json1_str)

local_coordinates = []
for i in range(len(json1_data)):
    local_coord = json1_data[i]['local_coordinate']
    local_coord = np.array(local_coord)
    new_arr = np.hstack((flat_img, local_coord))
    new_arr = new_arr.tolist()
    local_coordinates.append(new_arr)
Is there an optimal way to stack all of these 1048579-element 1D arrays to create the final matrix that can be used for training purposes?
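Purely as a sketch of one possible direction (the output file name, dtype, and the use of a memory-mapped file are assumptions, not part of the original question): preallocate the full result once instead of growing a Python list, and keep it on disk with np.memmap, since a 1048579 x 17,000 float matrix will not fit comfortably in RAM.
import json
import cv2
import numpy as np

image = cv2.imread('image_name.jpg', 0)
flat_img = image.ravel().astype(np.float32)   # 1048576 pixel values

with open('data.json') as f:
    json1_data = json.load(f)

n_rows = flat_img.size + 3                    # 1048579 rows per column
n_cols = len(json1_data)                      # ~17,000 columns

# Preallocate the final matrix on disk; each column is filled in place, so nothing is copied repeatedly.
out = np.memmap('stacked.dat', dtype=np.float32, mode='w+', shape=(n_rows, n_cols))

for j, entry in enumerate(json1_data):
    coord = np.asarray(entry['local_coordinate'], dtype=np.float32).ravel()
    out[:flat_img.size, j] = flat_img         # image part of the column
    out[flat_img.size:, j] = coord            # final 3 entries: the coordinates

out.flush()                                   # make sure everything is written to disk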

Append multiple rows to an openCV matrix [duplicate]

This question already has an answer here:
Add a row to a matrix in OpenCV
(1 answer)
Closed 5 years ago.
Can anyone tell me how to append a couple of rows (as a cv::Mat) at the end of an existing cv::Mat? Since it is a lot of data, I don't want to go through the rows with a for-loop and add them one-by-one. So here is what I want to do:
cv::Mat existing; //This is a Matrix, say of size 700x16
cv::Mat appendNew; //This is the new Matrix with additional data, say of size 200x16.
existing.push_back(appendNew);
If I try to push back the smaller matrix, I get an error of non-matching sizes:
OpenCV Error: Sizes of input arguments do not match
(Pushed vector length is not equal to matrix row length)
So I guess .push_back() tries to append the whole matrix like a kind of new channel, which won't work because it is much smaller than the existing matrix. Does someone know if the appending of the rows at the end of the existing matrix is possible as a whole, not going through them with a for-loop?
It seems like an easy question to me, nevertheless I was not able to find a simple solution online... So thanks in advance!
Cheers:)
You can use cv::vconcat() to append rows, either at the top or the bottom of a given matrix, as:
import cv2
import numpy as np

box = np.ones((50, 50, 3), dtype=np.uint8)
box[:] = np.array([0, 0, 255])

sample_row = np.ones((1, 50, 3), dtype=np.uint8)
sample_row[:] = np.array([255, 0, 0])

for i in xrange(5):
    box = cv2.vconcat([box, sample_row])
For visualization purposes I have created a red matrix and appended blue rows to the bottom. You may replace this with your original data; just make sure that both matrices to be concatenated have the same number of columns and the same data type. I have explicitly defined the dtype while creating the matrices.
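As a quick sketch matching the sizes mentioned in the question (700x16 existing rows plus 200x16 new rows; the zero/one arrays here are just placeholders):
import cv2
import numpy as np

existing = np.zeros((700, 16), dtype=np.float32)    # placeholder for the existing matrix
append_new = np.ones((200, 16), dtype=np.float32)   # placeholder for the rows to append

combined = cv2.vconcat([existing, append_new])      # same result as np.vstack((existing, append_new))
print(combined.shape)                               # (900, 16)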

How to reshape Tensor in C++ like Caffe's Blob

I want to use tensors of dynamic shapes in C++. For example, I want to add a new op in TensorFlow, but I do not know the output's shape at the beginning. With Caffe, I can first reshape the output blob to the maximum size I will use, and then reshape it to its actual size at the end.
How can I do this with TensorFlow's Tensor?
If you are not sure about the shape yet, you can leave one or more dimensions as None when defining the tensor. For example:
x = tf.placeholder(tf.float32, shape=[None, 1,1])
TensorFlow also has a tf.reshape() function that you can use in the same fashion as Caffe's reshape. For example:
x2 = tf.reshape(x, [-1, dim])  # -1 tells TensorFlow to infer this dimension from the remaining size
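As a small illustration of how the -1 dimension is inferred (a standalone tf.reshape call on a made-up constant, not code from the question):
import tensorflow as tf

t = tf.constant([[1., 2.], [3., 4.], [5., 6.]])  # shape (3, 2), 6 elements in total

flat = tf.reshape(t, [-1])      # shape (6,): -1 is inferred as 6
col = tf.reshape(t, [-1, 1])    # shape (6, 1): -1 is inferred as 6
pair = tf.reshape(t, [2, -1])   # shape (2, 3): -1 is inferred as 3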

Python Imaging Processing (PIL) - changing the overall RGB of an image

I am trying to change the RGB for the overall image for a project. Currently I am working with a test file before I apply it to the actual image. I want to test different RGB values but would first like to start with the mean of all three. How would I go about doing this? I have other modules installed, such as scipy, numpy, and matplotlib, if those are needed. Thanks.
from PIL import Image, ImageFilter
test = Image.open('/Users/MeganRCunninghan/Pictures/4th-of-July-Wallpaper.ppm')
test.show()
test.getrgb()
Assuming your image is stored as a numpy.ndarray (Test this with print type(test))...
Your image will be represented by an NxMx3 array. Basically this means you have an N by M image with a color depth of 3: your RGB values. Taking the mean along that last axis will leave you with an NxM array, where each value is now the average intensity. Numpy does this very well:
test = test.mean(2)
The parameter given, 2, specifies the dimension to take the mean along. It could be 0, 1, or 2, because your image matrix is 3-dimensional. This should return an NxM array. You will basically be left with a grayscale (color depth of 1) image. Try displaying the result; if you get Nx3 or Mx3, you know you have taken the average along the wrong axis. Note that you can check the dimensions of a numpy array with:
test.shape
Shape will be a tuple describing the dimensions of your image.
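Since the question opens the file with PIL rather than as a numpy array, a small sketch of the full round trip might look like the following (the path is the one from the question; converting back to a PIL image at the end is just one way to view the result):
import numpy as np
from PIL import Image

test = Image.open('/Users/MeganRCunninghan/Pictures/4th-of-July-Wallpaper.ppm')

arr = np.asarray(test, dtype=float)   # shape (N, M, 3) for an RGB image
gray = arr.mean(axis=2)               # shape (N, M): per-pixel average of R, G, B

print(arr.shape, gray.shape)
Image.fromarray(gray.astype(np.uint8)).show()   # view the grayscale result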

Convert location of point on raster layer to location of point in NumPy array

I am using, for the first time, the scikit-image package's MCP class to find the minimum cost path between two points over a cost raster from ArcGIS, converted to a NumPy array using the RasterToNumPyArray tool. However, among the MCP class attributes necessary for this are the start and end indices.
I do not know how to take a set of points (an ArcGIS shapefile that has the lat, long coordinates) and convert that to the location index in a NumPy array generated from a raster with spatial data. I know I can assign the raster-cell OID to the point in ArcGIS, but I am not sure how that can transfer over to the index in the NumPy array. Does anyone know if this is possible?
Alright, here is the only solution I could sort out myself:
import arcpy
from arcpy.sa import *
import numpy as np

arcpy.CheckOutExtension("Spatial")
# This ensures the extents of all my rasters will be the same, VERY IMPORTANT
arcpy.env.extent = "MAXOF"

# This step creates a raster that only has two value cells (where my points are),
# but the extent is that of my cost raster and the rest of the cells are NoData
extract = ExtractByMask("cost.img", "points.shp")
# Changes the value of those two points to 500, a value much higher than the other cells;
# choose whatever value (positive or negative) you know won't be in your cost raster
extract = Con(extract > 0, 500, 0)

cost = arcpy.RasterToNumPyArray("cost.img")
calc = arcpy.RasterToNumPyArray(extract)

# This produces the indices of all the entries in the array that are 500. However, the output
# is not in (12, 8), (17, 4) format; it looks like (array([12, 17]), array([8, 4])),
# so the point indices must be created with the next two lines
points = np.where(calc == 500)
start = points[0][0], points[1][0]
end = points[0][1], points[1][1]
Now I have the start and end indices for my MCP process. If the directionality of the start and end points is important in your analysis, then you will have to tweak this to be able to identify them in the array, but it wouldn't be too difficult.
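As a small numpy-only illustration of the np.where step (a made-up 20x20 array standing in for the extracted raster, so no ArcGIS is needed), showing how the (array([...]), array([...])) output is unpacked into (row, col) index pairs:
import numpy as np

calc = np.zeros((20, 20))
calc[12, 8] = 500    # hypothetical start cell
calc[17, 4] = 500    # hypothetical end cell

points = np.where(calc == 500)        # (array([12, 17]), array([8, 4]))

start = points[0][0], points[1][0]    # (12, 8)
end = points[0][1], points[1][1]      # (17, 4)
print(start, end)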