Custom layers for recursive interaction between layers - customization

I have a question. I'm trying to build a CNN where the model's output feeds back into the first Dense layer. Can you help?
e.g.:
model = Sequential()
model.add(Convolution2D(32, 3, 3, border_mode='same',
                        input_shape=X_train.shape[1:]))
model.add(Activation('relu'))
model.add(Convolution2D(32, 3, 3))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Convolution2D(64, 3, 3, border_mode='same'))
model.add(Activation('relu'))
model.add(Convolution2D(64, 3, 3))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))
The output of d2 should feed back in as part of the input to the Dense(512) layer, roughly:
Dense(512 + 10, name='d1')   # the 512-unit layer also receives the 10 outputs of d2
Activation('relu')
Dropout(0.5)
Dense(10, name='d2')
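The Sequential API cannot express this kind of feedback loop, so here is a minimal sketch of one way to read the request, using the Keras 2 functional API: unroll the d1 -> d2 loop for a fixed number of steps, feeding the previous d2 output back into d1 alongside the flattened CNN features. I read Dense(512 + 10) as "the 512-unit layer additionally receives the 10 outputs of d2". The feature size (1152), the number of unroll steps (3), and starting the feedback at zeros are all illustrative assumptions, not part of the original question.

from keras.layers import Input, Dense, Activation, Dropout, Concatenate
from keras.models import Model

features = Input(shape=(1152,))  # flattened output of the conv stack above; 1152 is a placeholder
prev = Input(shape=(10,))        # previous d2 output; feed zeros here on the first pass

d1 = Dense(512, name='d1')       # defined once so weights are shared across unroll steps
d2 = Dense(10, name='d2')

x = prev
for _ in range(3):               # 3 unroll steps, an arbitrary choice
    h = Concatenate()([features, x])   # d1's input is the 1152 features plus the 10 d2 outputs
    h = Activation('relu')(d1(h))
    h = Dropout(0.5)(h)
    x = Activation('softmax')(d2(h))

model = Model(inputs=[features, prev], outputs=x)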


In TensorFlow Lite Micro, are Dense/Dropout/Flatten layers available?

I know the available ops are listed in all_ops_resolver.cc, but there are none for Dropout, Flatten, or Dense.
The magic_wand example trains a model using these layers.
def build_cnn(seq_length):
    """Builds a convolutional neural network in Keras."""
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(
            8, (4, 3),
            padding="same",
            activation="relu",
            input_shape=(seq_length, 3, 1)),  # output_shape=(batch, 128, 3, 8)
        tf.keras.layers.MaxPool2D((3, 3)),  # (batch, 42, 1, 8)
        tf.keras.layers.Dropout(0.1),  # (batch, 42, 1, 8)
        tf.keras.layers.Conv2D(16, (4, 1), padding="same",
                               activation="relu"),  # (batch, 42, 1, 16)
        tf.keras.layers.MaxPool2D((3, 1), padding="same"),  # (batch, 14, 1, 16)
        tf.keras.layers.Dropout(0.1),  # (batch, 14, 1, 16)
        tf.keras.layers.Flatten(),  # (batch, 224)
        tf.keras.layers.Dense(16, activation="relu"),  # (batch, 16)
        tf.keras.layers.Dropout(0.1),  # (batch, 16)
        tf.keras.layers.Dense(4, activation="softmax")  # (batch, 4)
    ])
    return model
When loading the model I don't see these layers anywhere. Also, searching over the whole codebase didn't bring much clarity.
static tflite::MicroMutableOpResolver<5> micro_op_resolver; // NOLINT
micro_op_resolver.AddConv2D();
micro_op_resolver.AddDepthwiseConv2D();
micro_op_resolver.AddFullyConnected();
micro_op_resolver.AddMaxPool2D();
micro_op_resolver.AddSoftmax();
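What you are seeing is the converter renaming and folding layers rather than missing ops: a Dense layer is lowered to the FULLY_CONNECTED op (hence AddFullyConnected()), Dropout is an identity at inference time and is stripped out entirely during conversion, and Flatten is typically folded away because TFLite's FULLY_CONNECTED flattens its input implicitly (otherwise it becomes a RESHAPE). A hedged way to confirm this on your own model, assuming a TF version >= 2.7 where tf.lite.experimental.Analyzer is available:

import tensorflow as tf

# build_cnn is the function above; seq_length=128 matches its shape comments
model = build_cnn(seq_length=128)
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Prints the converted graph: Dense appears as FULLY_CONNECTED and the
# Dropout layers are gone, since they do nothing at inference time.
tf.lite.experimental.Analyzer.analyze(model_content=tflite_model)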

LSTM classification for 2D dataset

I have a dataset with 10000 rows and 4 features (the positions of two cars and their velocities) as x_train, and y_train holds my labels, which are 0 or 1. I want to classify the dataset with an LSTM, but the accuracy does not go above 50 percent. I tested both input shapes, (10000, 4, 1) and (10000, 1, 4), but the accuracy stays around 50 percent. What should I do to improve the accuracy, and which of (10000, 4, 1) and (10000, 1, 4) is the correct format?
x_train, x_test, y_train, y_test = train_test_split(x, label, test_size=0.25, random_state=42)
x_train = x_train.reshape(-1, 4, 1)
x_test = x_test.reshape(-1, 4, 1)
y_train = y_train.reshape(-1, 1, 1)
y_test = y_test.reshape(-1, 1, 1)
model = Sequential()
model.add(LSTM(256, input_shape=(x_train.shape[1:]), activation='tanh', return_sequences=True))
model.add(Dropout(0.2))
model.add(BatchNormalization())
model.add(LSTM(256, activation='tanh'))
model.add(Dropout(0.2))
model.add(BatchNormalization())
model.add(Dense(128, activation='tanh'))
model.add(BatchNormalization())
model.add(Dense(2, activation='softmax'))
model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
history = model.fit(x_train, y_train, validation_split=0.33, epochs=20, batch_size=128, shuffle=True)
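A few things in the snippet stand out: the labels are reshaped to (-1, 1, 1) while the model ends in a 2-unit softmax, and binary_crossentropy with a 2-unit softmax expects one-hot targets rather than integer 0/1 labels. Also, reshaping to (-1, 4, 1) treats the 4 features as 4 timesteps of 1 feature, and (-1, 1, 4) as a single timestep, so neither gives the LSTM a real sequence to exploit. Below is a minimal corrected sketch under those assumptions (integer labels, sparse_categorical_crossentropy, (timesteps=4, features=1) input); if each row really is a single snapshot rather than a sequence, a plain Dense network may be the better fit.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense

y_train = y_train.reshape(-1)   # (batch,) integer class ids, 0 or 1
y_test = y_test.reshape(-1)

model = Sequential()
model.add(LSTM(64, input_shape=(4, 1), activation='tanh'))
model.add(Dropout(0.2))
model.add(Dense(2, activation='softmax'))
model.compile(loss='sparse_categorical_crossentropy',  # matches integer labels
              optimizer='adam',
              metrics=['accuracy'])
history = model.fit(x_train, y_train, validation_split=0.33,
                    epochs=20, batch_size=128, shuffle=True)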

PyTorch tensor dimension multiplication

I'm trying to implement the Grad-CAM algorithm:
https://arxiv.org/pdf/1610.02391.pdf
My arguments are:
activations: Tensor with shape torch.Size([1, 512, 14, 14])
alpha values: Tensor with shape torch.Size([512])
I want to multiply each activation map (along dimension index 1, of size 512) by its corresponding alpha value: for example, if the i-th of the 512 activations is 4 and the i-th alpha value is 5, then my new i-th activation would be 20.
The shape of the output should be torch.Size([1, 512, 14, 14])
Assuming the desired output is of shape (1, 512, 14, 14).
You can achieve this with torch.einsum:
torch.einsum('nchw,c->nchw', x, y)
Or with a plain broadcast multiplication, but you will first need to add a couple of extra singleton dimensions to y:
x*y[None, :, None, None]
Here's an example with x.shape = (1, 4, 2, 2) and y.shape = (4,):
>>> x = torch.arange(16).reshape(1, 4, 2, 2)
>>> x
tensor([[[[ 0,  1],
          [ 2,  3]],

         [[ 4,  5],
          [ 6,  7]],

         [[ 8,  9],
          [10, 11]],

         [[12, 13],
          [14, 15]]]])
>>> y = torch.arange(1, 5)
>>> y
tensor([1, 2, 3, 4])
>>> x*y[None, :, None, None]
tensor([[[[ 0,  1],
          [ 2,  3]],

         [[ 8, 10],
          [12, 14]],

         [[24, 27],
          [30, 33]],

         [[48, 52],
          [56, 60]]]])
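In Grad-CAM itself, the channel-weighted activations are then summed over the channel dimension and passed through a ReLU to produce the heatmap (Eq. 2 of the linked paper); here is a minimal sketch with the shapes from the question, using random stand-ins for the real tensors:

import torch
import torch.nn.functional as F

activations = torch.randn(1, 512, 14, 14)  # stand-in for the real activations
alphas = torch.randn(512)                  # stand-in for the real alpha values

# alphas.view(1, -1, 1, 1) is equivalent to alphas[None, :, None, None]
weighted = activations * alphas.view(1, -1, 1, 1)  # (1, 512, 14, 14)
cam = F.relu(weighted.sum(dim=1))                  # (1, 14, 14) heatmap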

Create partial sublists by index, discarding some data [(x,y,z),...] to [(x,y),...]

I have list1, which has N items, where each item is a tuple (x, y, z). For each index z in list2, I need to create a new list which has the corresponding items from list1 with only (x, y); this needs to be done for list3 and list4 too.
list1 = [(1250, 1442, 0), (1280, 1655, 1), (1029, 1680, 2), (624, 1573, 3), (732, 1159, 4), (1530, 1634, 5), (1885, 1628, 6), (2152, 1834, 7), (1252, 2459, 8), (1309, 3023, 9), (1376, 3585, 10), (1571, 2388, 11), (1682, 2952, 12), (1686, 3579, 13), (1184, 1391, 14), (1291, 1382, 15), (1117, 1440, 16), (1361, 1400, 17)]
list2 = [0,1,14,15,16,17]
list3 = [2,3,4,5,6,7,8,11]
list4 = [9,10,12,13]
For example, list5, built from list1 and list2, looks like
list5 = [(1250, 1442), (1280, 1655), (1184, 1391), ...]
Can anyone suggest a fast way to do it? Thank you
Easy enough:
def getXYfromIndex(l, indexes):
    """Returns (x, y) from bigger list 'l' containing (x, y, z).
    Uses only those elements (by index) of 'l' that are in 'indexes'."""
    # list comprehension: returns (x, y) for each index of 'l' that is in 'indexes'
    return [(x, y) for x, y, _ in (l[i] for i in indexes)]
list1 = [(1250, 1442, 0), (1280, 1655, 1), (1029, 1680, 2), (624, 1573, 3), (732, 1159, 4),
         (1530, 1634, 5), (1885, 1628, 6), (2152, 1834, 7), (1252, 2459, 8),
         (1309, 3023, 9), (1376, 3585, 10), (1571, 2388, 11), (1682, 2952, 12),
         (1686, 3579, 13), (1184, 1391, 14), (1291, 1382, 15), (1117, 1440, 16),
         (1361, 1400, 17)]
list2 = [0,1,14,15,16,17]
list3 = [2,3,4,5,6,7,8,11]
list4 = [9,10,12,13]
print(getXYfromIndex(list1,list2)) # use list5 = getXYfromIndex(list1,list2)
print(getXYfromIndex(list1,list3)) # to work with those (x,y) - I just print them
print(getXYfromIndex(list1,list4))
Output:
[(1250, 1442), (1280, 1655), (1184, 1391), (1291, 1382), (1117, 1440), (1361, 1400)]
[(1029, 1680), (624, 1573), (732, 1159), (1530, 1634), (1885, 1628), (2152, 1834),
(1252, 2459), (1571, 2388)]
[(1309, 3023), (1376, 3585), (1682, 2952), (1686, 3579)]
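Note that indexing l[i] assumes each tuple's z value equals its position in list1, which happens to hold for the sample data. If the z values were arbitrary keys rather than positions, a dict keyed on z would be the safer variant:

# keys on z itself instead of on list position; same output for this data
lookup = {z: (x, y) for x, y, z in list1}
list5 = [lookup[i] for i in list2]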

Two-Dimensional structured array

I am trying to construct a structured array in Python that can be accessed by the names of its columns and rows. Is this possible with numpy's structured arrays?
Example:
My array should have roughly this form:
My_array =    A  B  C
           E  1  2  3
           F  4  5  6
           G  7  8  9
And I want to be able to do the following:
My_array["A"]["E"] = 1
My_array["C"]["F"] = 6
Is it possible to do this in Python using structured arrays, or is there another type of structure which is more suitable for such a task?
A basic structured array gives you something that can be indexed with one name:
In [276]: dt = np.dtype([('A', int), ('B', int), ('C', int)])
In [277]: x = np.arange(9).reshape(3, 3).view(dtype=dt)
In [278]: x
Out[278]:
array([[(0, 1, 2)],
       [(3, 4, 5)],
       [(6, 7, 8)]],
      dtype=[('A', '<i4'), ('B', '<i4'), ('C', '<i4')])
In [279]: x['B']  # index by field name
Out[279]:
array([[1],
       [4],
       [7]])
In [280]: x[1]  # index by row (array element)
Out[280]:
array([(3, 4, 5)],
      dtype=[('A', '<i4'), ('B', '<i4'), ('C', '<i4')])
In [281]: x['B'][1]
Out[281]: array([4])
In [282]: x.shape  # could be reshaped to (3,)
Out[282]: (3, 1)
The view approach produced a 2d array, but with just one column. The usual columns are replaced by dtype fields. It's 2d but with a twist. By using view the data buffer is unchanged; the dtype just provides a different way of accessing those 'columns'. dtype fields are, technically, not a dimension. They don't register in either the .shape or .ndim of the array. Also you can't use x[0,'A'].
recarray does the same thing, but adds the option of accessing fields as attributes, e.g. x.B is the same as x['B'].
rows still have to be accessed by index number.
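To make that last point concrete, field and positional indexing have to be applied one after the other, not combined in a single subscript (a small illustration continuing the session above):

x[0]['A']   # array([0]) - row first, then field
x['A'][0]   # array([0]) - same thing the other way around
x[0, 'A']   # raises IndexError: a field name is not an index along a dimension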
Another way of constructing a structured array is to define the values as a list of tuples.
In [283]: x1 = np.arange(9).reshape(3, 3)
In [284]: x2 = np.array([tuple(i) for i in x1], dtype=dt)
In [285]: x2
Out[285]:
array([(0, 1, 2), (3, 4, 5), (6, 7, 8)],
      dtype=[('A', '<i4'), ('B', '<i4'), ('C', '<i4')])
In [286]: x2.shape
Out[286]: (3,)
ones, zeros, empty also construct basic structured arrays:
In [287]: np.ones((3,), dtype=dt)
Out[287]:
array([(1, 1, 1), (1, 1, 1), (1, 1, 1)],
      dtype=[('A', '<i4'), ('B', '<i4'), ('C', '<i4')])
I can construct an array that is indexed with 2 field names, by nesting dtypes:
In [294]: dt1 = np.dtype([('D', int), ('E', int), ('F', int)])
In [295]: dt2 = np.dtype([('A', dt1), ('B', dt1), ('C', dt1)])
In [296]: y = np.ones((), dtype=dt2)
In [297]: y
Out[297]:
array(((1, 1, 1), (1, 1, 1), (1, 1, 1)),
      dtype=[('A', [('D', '<i4'), ('E', '<i4'), ('F', '<i4')]), ('B', [('D', '<i4'), ('E', '<i4'), ('F', '<i4')]), ('C', [('D', '<i4'), ('E', '<i4'), ('F', '<i4')])])
In [298]: y['A']['F']
Out[298]: array(1)
But frankly this is rather convoluted. I haven't even figured out how to set the elements to arange(9) (without iterating over field names).
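If what you really want is labels on both axes, a pandas DataFrame handles this far more naturally than nested dtypes; a short aside (not a structured array, and pandas is an extra dependency):

import numpy as np
import pandas as pd

df = pd.DataFrame(np.arange(1, 10).reshape(3, 3),
                  index=['E', 'F', 'G'], columns=['A', 'B', 'C'])
df['A']['E']      # 1, column first as written in the question
df.loc['F', 'C']  # 6, the idiomatic row/column lookup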
Structured arrays are most commonly produced by reading csv files with np.genfromtxt (or loadtxt). The result is a named field for each labeled column, and a numbered 'row' for each line in the file.
With a recarray, you can access columns with dot notation or by explicit reference to the column name. Rows are still accessed by row number; I haven't seen them accessed via a row name. For example:
>>> import numpy as np
>>> a = np.arange(1, 10, 1).reshape(3, 3)
>>> dt = np.dtype([('A', 'int'), ('B', 'int'), ('C', 'int')])
>>> a.dtype = dt
>>> r = a.view(type=np.recarray)
>>> r
rec.array([[(1, 2, 3)],
           [(4, 5, 6)],
           [(7, 8, 9)]],
          dtype=[('A', '<i4'), ('B', '<i4'), ('C', '<i4')])
>>> r.A
array([[1],
       [4],
       [7]])
>>> r['A']
array([[1],
       [4],
       [7]])
>>> r.A[0]
array([1])
>>> a['A'][0]
array([1])
>>> # now for the row
>>> r[0]
rec.array([(1, 2, 3)],
          dtype=[('A', '<i4'), ('B', '<i4'), ('C', '<i4')])
You can specify the dtype and the type at the same time:
>>> a = np.ones((3, 3))
>>> b = a.view(dtype=[('A', '<f8'), ('B', '<f8'), ('C', '<f8')], type=np.recarray)
>>> b
rec.array([[(1.0, 1.0, 1.0)],
           [(1.0, 1.0, 1.0)],
           [(1.0, 1.0, 1.0)]],
          dtype=[('A', '<f8'), ('B', '<f8'), ('C', '<f8')])
>>> b.A
array([[ 1.],
       [ 1.],
       [ 1.]])
>>> b.A[0]
array([ 1.])