Initialize matrix - python-2.7

Why does the top line of code create a zeroed out matrix but the bottom four lines of code give an error ("list assignment index out of range")?
matrix = [[0 for i in range(6)] for j in range(6)]

matrix = [[]]
for i in range(6):
    for j in range(6):
        matrix[i][j] = 0

Because the first line fills the matrix as it builds it: the inner comprehension creates each row of six zeros, and the outer one collects six such rows.
The second definition actually creates a list of size 1 with an empty list as its element 0. The very first assignment, matrix[0][0] = 0, already fails, because that inner empty list has no index 0 to assign to (and matrix[1] does not exist at all).
The correct form of the second part should be:
matrix = []
for i in range(6):
    temp = []
    for j in range(6):
        temp.append(0)
    matrix.append(temp)
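The loop above and the original comprehension build the same structure. One related pitfall worth noting: [[0] * 6] * 6 looks equivalent but repeats a reference to a single inner list, so all six "rows" are the same object. A quick sketch:

```python
# build the matrix with the explicit loop
matrix = []
for i in range(6):
    temp = []
    for j in range(6):
        temp.append(0)
    matrix.append(temp)

# the comprehension produces an equal matrix
comp = [[0 for i in range(6)] for j in range(6)]
print(matrix == comp)  # True

# pitfall: this repeats one inner list six times
aliased = [[0] * 6] * 6
aliased[0][0] = 1
print(aliased[1][0])  # 1 -- every "row" changed, because they share one list
```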


Can't input numbers in matrix. Why do I get "IndexError: list assignment index out of range"?
i, j = 5, 7
matrix = [[x + y for x in xrange(i)] for y in xrange(j)]
print (matrix)
for w in xrange(i):
    print (w)
    for h in xrange(j):
        tmp = int(input('Enter element of matrix'))
        matrix[w][h] = tmp
sums = map( lambda row: sum(row), matrix)
print (matrix)
print (sums)
print ('max:', sums.index(max(sums)))
print ('min:', sums.index(min(sums)))
matrix = [[x + y for x in xrange(i)] for y in xrange(j)]
The line above makes the number of columns i and the number of rows j, because it creates j lists of i values each, where each inner list acts as a row.
for w in xrange(i):
    print (w)
    for h in xrange(j):
        tmp = int(input('Enter element of matrix'))
        matrix[w][h] = tmp
In this loop you are using w as the row index, but it ranges from 0 to i-1 when it should range from 0 to j-1. Similarly, h should range from 0 to i-1, not 0 to j-1.
So your loop should look like this:
for w in xrange(j):  # note: this changed from i to j
    print (w)
    for h in xrange(i):  # and this from j to i
        tmp = int(input('Enter element of matrix'))
        matrix[w][h] = tmp
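To see the corrected bounds without the interactive input() calls, here is a minimal sketch (the assigned values are just stand-ins for user input) that fills the same j-row, i-column matrix:

```python
i, j = 5, 7  # i columns, j rows, as in the question

# j rows of i columns each
matrix = [[x + y for x in range(i)] for y in range(j)]

# w runs over rows (0 to j-1), h over columns (0 to i-1)
for w in range(j):
    for h in range(i):
        matrix[w][h] = w * i + h  # stand-in for int(input(...))

print(len(matrix), len(matrix[0]))  # 7 rows of 5 columns
```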

Python: Forming overlapping matrix of 3*3 using 9*9 matrix

I am trying to create a neighborhood for each pixel using the pixel matrix (the matrix of pixel values in a single-band image). I have to form a 3*3 matrix for each element of the 9*9 matrix, keeping that element at the center, so that every element gets a neighborhood. Thus the element at position (0,0) will have the neighboring elements
[[0 0 0],
 [0 2 3],
 [0 3 4]]
The same happens for all elements in the first and last rows and columns. The attached image can help in understanding this better.
So the resultant matrix will have the size 81*81. It is not necessary to save the small matrices in matrix form.
I have tried the following:
n = size[0]
z = 3
x = y = 0
m = 0
while all([x < 0, y < 0, x >= n, y >= n]):
    continue
else:
    for i in range(0, n):
        arcpy.AddMessage("Hello")
        for x in range(m, m + 3):
            temp_matrix = [[0 for i in range(3)] for j in range(3)]
            for y in range(m, m + 3):
                temp_matrix[x][y] = arr_Pixels[x][y]
            m += 1
            y += 1
        temp_List.append(temp_matrix)
But I am getting the error "list assignment index out of range". The code also looks too lengthy and confusing. I understand the error occurs because the indices into temp_matrix grow past its length, which never increases.
Is there a better way to implement this for the image? The smaller matrices can be saved in a list rather than a matrix. Please help me.
Update #2
n = size[0]
new_matrix = []
for i in range(0, n):
    for j in range(0, n):
        temp_mat = [[0 for k in range(3)] for l in range(3)]
        for k in range(i - 1, i + 2):
            for l in range(j - 1, j + 2):
                if any([k < 0, l < 0, k > n - 1, l > n - 1]):
                    temp_mat[k][l] = 0
                else:
                    temp_mat[k][l] = arr_Pixels[k][l]
        new_matrix.append(temp_mat)
I think one issue is your use of while/else. The code in the else block only executes once the while condition becomes false, and the while will not run again afterwards. This question might be helpful.
Thus, once it enters the else, it never again checks that x and y are within bounds, meaning they can increase beyond n, which I assume is the length of arr_Pixels.
A better way would be two nested for loops that run from 0 to n, build the temporary neighborhood matrix for each position, and add it to the result. Here is a rough outline:
new_matrix = []  # future list of 3x3 neighborhood matrices
for i in range(0, n):
    for j in range(0, n):
        # create a neighborhood matrix going around (i, j)
        # add the temp matrix to new_matrix
This way you avoid having to separately check that the indexes you are accessing stay in bounds, because i and j are always less than n.
I found a better way of doing it: pad the whole matrix with zeros. This resolves the negative-indexing problems at the borders.
The matrix can be padded like this:
pixels = np.pad(arr_Pixels, (1, 1), mode='constant', constant_values=(0, 0))
It adds rows and columns of zeros along both axes.
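Putting the padding idea together with the nested loops, here is a minimal sketch (the 9*9 array contents are illustrative stand-ins for real pixel values) that collects one 3*3 neighborhood per element:

```python
import numpy as np

arr_Pixels = np.arange(81).reshape(9, 9)  # stand-in for the real pixel matrix

# one row/column of zeros on every side, so border pixels get zero
# neighbors instead of wrapping around via negative indices
pixels = np.pad(arr_Pixels, (1, 1), mode='constant', constant_values=(0, 0))

neighborhoods = []
n = arr_Pixels.shape[0]
for i in range(n):
    for j in range(n):
        # (i, j) in the original array is (i+1, j+1) in the padded one,
        # so the 3x3 window around it is pixels[i:i+3, j:j+3]
        neighborhoods.append(pixels[i:i + 3, j:j + 3])

print(len(neighborhoods))  # 81: one 3x3 neighborhood per pixel
print(neighborhoods[0])    # corner neighborhood, zero-padded on two sides
```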

Enumerating all possible matrices with constraints

I'm attempting to enumerate all possible matrices of size r by r under a few constraints.
Row and column sums must be in non-ascending order.
Starting from the top-left element and moving down the main diagonal, each row and column subset from that entry must be made up of combinations with replacement of 0 up to the value of that upper-left entry (inclusive).
The row and column sums must all be less than or equal to a predetermined value n.
The main diagonal must be in non-ascending order.
An important note: I need every combination to be stored somewhere, or, if written in C++, to be run through a few more functions after it is found.
r and n are values that range from 2 to, say, 100.
I've tried a recursive approach, along with an iterative one, but keep getting hung up on keeping track of the column and row sums, along with all the data, in a manageable way.
I have attached my most recent attempt (far from complete), but it may give you an idea of what is going on.
The function first_section() builds row zero and column zero correctly, but other than that I don't have anything successful.
I need more than a push to get this going; the logic is a pain in the butt and is swallowing me whole. I need this written in either Python or C++.
import numpy as np
from itertools import combinations_with_replacement

global r
global n
r = 4
n = 8
global myarray
myarray = np.zeros((r,r))
global arraysums
arraysums = np.zeros((r,2))

def first_section():
    bigData = []
    myarray = np.zeros((r,r))
    arraysums = np.zeros((r,2))
    for i in reversed(range(1,n+1)):
        myarray[0,0] = i
        stuff = []
        stuff = list(combinations_with_replacement(range(i),r-1))
        for j in range(len(stuff)):
            myarray[0,1:] = list(reversed(stuff[j]))
            arraysums[0,0] = sum(myarray[0,:])
            for k in range(len(stuff)):
                myarray[1:,0] = list(reversed(stuff[k]))
                arraysums[0,1] = sum(myarray[:,0])
                if arraysums.max() > n:
                    break
                bigData.append(np.hstack((myarray[0,:],myarray[1:,0])))
                if printing: print 'myarray \n%s' %(myarray)
    return bigData
def one_more_section(bigData,index):
    newData = []
    for item in bigData:
        if printing: print 'item = %s' %(item)
        upperbound = int(item[index-1])  # will need to have logic worked out
        if printing: print 'upperbound = %s' % (upperbound)
        for i in reversed(range(1,upperbound+1)):
            myarray[index,index] = i
            stuff = []
            stuff = list(combinations_with_replacement(range(i),r-1))
            for j in range(len(stuff)):
                myarray[index,index+1:] = list(reversed(stuff[j]))
                arraysums[index,0] = sum(myarray[index,:])
                for k in range(len(stuff)):
                    myarray[index+1:,index] = list(reversed(stuff[k]))
                    arraysums[index,1] = sum(myarray[:,index])
                    if arraysums.max() > n:
                        break
                    if printing: print 'index = %s' %(index)
                    newData.append(np.hstack((myarray[index,index:],myarray[index+1:,index])))
                    if printing: print 'myarray \n%s' %(myarray)
    return newData

bigData = first_section()
bigData = one_more_section(bigData,1)
A possible matrix could look like this (r = 4, n >= 6):
|3 2 0 0| = 5
|3 2 0 0| = 5
|0 0 2 1| = 3
|0 0 0 1| = 1
 6 4 2 2
Here's a solution in numpy and Python 2.7. Note that all the rows and columns are in non-increasing order; you only specified that they should be combinations with replacement, not how they should be sorted (and generating combinations is simplest with sorted lists).
The code could be optimized somewhat by keeping row and column sums around as arguments instead of recomputing them.
import numpy as np

r = 2     # matrix dimension
maxs = 5  # maximum sum of row/column

def generate(r, maxs):
    # We create an extra row and column for the starting "dummy" values.
    # Filling in the matrix becomes much simpler when we do not have to treat
    # cells with one or two zero indices in a special way. Thus, we start
    # iteration from the (1, 1) index.
    m = np.zeros((r + 1, r + 1), dtype=np.int32)
    m[0] = m[:,0] = maxs + 1
    def go(n, i, j):
        # If we completely filled the matrix, yield a copy of the non-dummy parts.
        if (i, j) == (r, r):
            yield m[1:, 1:].copy()
            return
        # We compute the next indices in row-major order (the choice is arbitrary).
        (i2, j2) = (i + 1, 1) if j == r else (i, j + 1)
        # Compute the maximum possible value for the current cell.
        max_val = min(
            maxs - m[i, 1:].sum(),
            maxs - m[1:, j].sum(),
            m[i, j-1],
            m[i-1, j])
        for n2 in xrange(max_val, -1, -1):
            m[i, j] = n2
            for matrix in go(n2, i2, j2):
                yield matrix
    return go(maxs, 1, 1)  # note that this is a generator object

# testing
for matrix in generate(r, maxs):
    print
    print matrix
If you'd like to have all the valid permutations in the rows and columns, this code below should work.
def generate(r, maxs):
    m = np.zeros((r + 1, r + 1), dtype=np.int32)
    rows = [0]*(r+1)  # We avoid recomputing row/col sums on each cell.
    cols = [0]*(r+1)
    rows[0] = cols[0] = m[0, 0] = maxs
    def go(i, j):
        if (i, j) == (r, r):
            yield m[1:, 1:].copy()
            return
        (i2, j2) = (i + 1, 1) if j == r else (i, j + 1)
        max_val = min(rows[i-1] - rows[i], cols[j-1] - cols[j])
        if i == j:
            max_val = min(max_val, m[i-1, j-1])
        if (i, j) != (1, 1):
            max_val = min(max_val, m[1, 1])
        for n in xrange(max_val, -1, -1):
            m[i, j] = n
            rows[i] += n
            cols[j] += n
            for matrix in go(i2, j2):
                yield matrix
            rows[i] -= n
            cols[j] -= n
    return go(1, 1)

How to map the indexes of a matrix to a 1-dimensional array (C++)?

I have an 8x8 matrix, like this:
char matrix[8][8];
Also, I have an array of 64 elements, like this:
char array[64];
Then I have drawn the matrix as a table and filled the cells with numbers, each number incremented from left to right, top to bottom.
If I have, say, index 3 (column) and 4 (row) into the matrix, I know that it corresponds to the element at position 35 in the array, as can be seen in the table I've drawn. I believe there is some formula to translate the two matrix indexes into a single array index, but I can't figure out what it is.
Any ideas?
The way most languages store multi-dimensional arrays is by doing a conversion like the following:
If matrix has size n (rows) by m (columns), and we're using "row-major ordering" (where we count along the rows first), then:
matrix[ i ][ j ] = array[ i*m + j ]
Here i goes from 0 to (n-1) and j from 0 to (m-1).
So it's just like a number system of base 'm'. Note that the size of the first dimension (here the number of rows, n) doesn't appear in the formula at all.
For a conceptual understanding, think of a 3x5 matrix with 'i' as the row number and 'j' as the column number, numbering from (i, j) = (0, 0) --> 0. For 'row-major' ordering (like this), the layout looks like:
        |-------- 5 --------|
Row      _____________________ _ _
 0      |  0   1   2   3   4  |  |
 1      |  5   6   7   8   9  |  3
 2      | 10  11  12  13  14  | _|_
        |_____________________|
Column     0   1   2   3   4
As you move along the row (i.e. increase the column number), you just start counting up, so the Array indices are 0,1,2.... When you get to the second row, you already have 5 entries, so you start with indices 1*5 + 0,1,2.... On the third row, you have 2*5 entries already, thus the indices are 2*5 + 0,1,2....
For higher dimension, this idea generalizes, i.e. for a 3D matrix L by N by M:
matrix[ i ][ j ][ k ] = array[ i*(N*M) + j*M + k ]
and so on.
For a really good explanation, see: http://www.cplusplus.com/doc/tutorial/arrays/; or for some more technical aspects: http://en.wikipedia.org/wiki/Row-major_order
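As a quick sanity check of the formula (sketched in Python rather than C++ so it can be run directly; the 3x5 numbers mirror the diagram above), flattening a row-major matrix and indexing with i*m + j recovers the same elements:

```python
n, m = 3, 5  # rows, columns

# the 3x5 matrix from the diagram, numbered left to right, top to bottom
matrix = [[i * m + j for j in range(m)] for i in range(n)]

# flatten in row-major order
array = [x for row in matrix for x in row]

print(matrix[2][3], array[2 * m + 3])  # the same element both ways

# the 3D generalization: matrix[i][j][k] == array[i*(N*M) + j*M + k]
L, N, M = 2, 3, 4
cube = [[[i * N * M + j * M + k for k in range(M)] for j in range(N)]
        for i in range(L)]
flat = [x for plane in cube for row in plane for x in row]
print(cube[1][2][3] == flat[1 * N * M + 2 * M + 3])  # True
```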
For row-major ordering, I believe the statement matrix[ i ][ j ] = array[ i*n + j ] is wrong.
The offset should be offset = (row * NUMCOLS) + column.
Your statement works out to row * NUMROWS + column, which is wrong.
The links you provided do give a correct explanation.
Something like this?
// columns = amount of columns, x = column, y = row
var calculateIndex = function(columns, x, y){
    return y * columns + x;
};
The example below converts an index back to x and y coordinates.
// index = the flat index, columns = amount of columns, rows = amount of rows
var calculateCoordinates = function(index, columns, rows){
    // for each row
    for(var i = 0; i < rows; i++){
        // check if the index parameter is in this row
        if(index < (columns * i) + columns && index >= columns * i){
            // return x, y
            return [index - columns * i, i];
        }
    }
    return null;
};
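The same back-and-forth conversion can be written more directly with divmod (a Python sketch of the idea above, using the 8x8 board from the question):

```python
def calculate_index(columns, x, y):
    # (column, row) -> flat index, row-major
    return y * columns + x

def calculate_coordinates(index, columns):
    # flat index -> (x, y); divmod does the loop's search in one step
    y, x = divmod(index, columns)
    return x, y

print(calculate_index(8, 3, 4))      # column 3, row 4 -> 35
print(calculate_coordinates(35, 8))  # back to (3, 4)
```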

trouble calculating offset index into 3D array

I am writing a CUDA kernel to create a 3x3 covariance matrix for each location in the rows*cols main matrix. That 3D matrix is rows*cols*9 in size, which I allocated in a single malloc accordingly, so I need to access it with a single index value.
The 9 values of each 3x3 covariance matrix are set according to the appropriate row r and column c of some other 2D arrays.
In other words, I need to calculate the appropriate index for the 9 elements of the 3x3 covariance matrix, the row and column offset into the 2D matrices that feed the values, and the appropriate index into the storage array.
i have tried to simplify it down to the following:
//I am calling this kernel with 1D blocks who are 512 cols x 1row. TILE_WIDTH=512
int bx = blockIdx.x;
int by = blockIdx.y;
int tx = threadIdx.x;
int ty = threadIdx.y;
int r = by + ty;
int c = bx*TILE_WIDTH + tx;
int offset = r*cols+c;
int ndx = r*cols*rows + c*cols;
if((r < rows) && (c < cols)){ //this IF statement is trying to avoid the case where a threadblock went bigger than my original array..not sure if correct
d_cov[ndx + 0] = otherArray[offset];//otherArray just contains a value that I might do some operations on to set each of the ndx0-ndx9 values in d_cov
d_cov[ndx + 1] = otherArray[offset];
d_cov[ndx + 2] = otherArray[offset];
d_cov[ndx + 3] = otherArray[offset];
d_cov[ndx + 4] = otherArray[offset];
d_cov[ndx + 5] = otherArray[offset];
d_cov[ndx + 6] = otherArray[offset];
d_cov[ndx + 7] = otherArray[offset];
d_cov[ndx + 8] = otherArray[offset];
}
When I check this array against the values calculated on the CPU, which loops over i = rows, j = cols, k = 1..9, the results do not match up.
In other words, d_cov[i*rows*cols + j*cols + k] != correctAnswer[i][j][k].
Can anyone give me any tips on how to solve this problem? Is it an indexing problem, or some other logic error?
Rather than the answer (which I haven't stared hard enough to find), here's the technique I usually use for debugging these sorts of issues. First, set all values in your destination array to NaN. (You can do this via cudaMemset -- set every byte to 0xFF.) Then try uniformly setting every location to the value of the row, then inspect the results. In theory, it should look something like:
0 0 0 ... 0
1 1 1 ... 1
. . . . .
. . . . .
. . . . .
n n n ... n
If you see NaNs, you've failed to write to an element; if you see row values out of place, something is wrong, and they'll usually be out of place in a suggestive pattern. Do something similar with the column value, and with the plane. Usually, this trick helps me find which part of the index calculation is awry, which is most of the battle. Hope that helps.
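That workflow can be mocked up host-side (a numpy sketch with made-up dimensions, not the actual kernel) to see what a correct result should look like before running it on the device:

```python
import numpy as np

rows, cols, k = 4, 6, 9

# Step 1: poison the destination with NaN so unwritten cells stand out.
d_cov = np.full(rows * cols * k, np.nan)

# Step 2: have every "thread" write its row index instead of real data.
for r in range(rows):
    for c in range(cols):
        ndx = (r * cols + c) * k  # the offset*9 indexing suggested below
        for i in range(k):
            d_cov[ndx + i] = r

# Step 3: inspect. No NaNs means every cell was written; reshaping should
# show constant-r planes if the indexing lines up.
print(np.isnan(d_cov).any())                  # False: nothing left unwritten
print(d_cov.reshape(rows, cols, k)[2, 3, :])  # all 2s if rows line up
```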
I might be just stupid, but what is the logic in this line?
int ndx = r*cols*rows + c*cols;
Shouldn't you have
int ndx = offset*9;
If you said that the size of your covariance array was rows*cols*9, then wouldn't offset*9 take you at the same location in the 3D covariance array as where you are in your input array. So then offset*9+0 would be the location (0,0) of the 3x3 covariance matrix of the element at offset, offset*9+1 would be (0,1), offset*9+2 would be (0,2), offset*9+3 would be (1,0) and so on until offset*9+8.