I have already defined my own loss function, and it does run: the forward pass seems fine. However, I am not sure whether it is correct, because I did not define backward().
import numpy as np
import torch.nn as nn
import torch.nn.functional as F

class _Loss(nn.Module):
    def __init__(self, size_average=True):
        super(_Loss, self).__init__()
        self.size_average = size_average

class MyLoss(_Loss):
    def forward(self, input, target):
        loss = 0
        weight = np.zeros((BATCH_SIZE, BATCH_SIZE))
        for a in range(BATCH_SIZE):
            for b in range(BATCH_SIZE):
                weight[a][b] = get_weight(target.data[a][0])
        for i in range(BATCH_SIZE):
            for j in range(BATCH_SIZE):
                a_ij = (input[i] - input[j] - target[i] + target[j]) * weight[i, j]
                loss += F.relu(a_ij)
        return loss
The questions I want to ask are:
1) Do I need to define backward() for the loss function?
2) If so, how do I define backward()?
3) Is there any way to access the indices of the data while doing SGD in torch?
You can write a loss function like the one below.
def mse_loss(input, target):
    return ((input - target) ** 2).sum() / input.data.nelement()
You do not need to implement a backward function. As long as all of the loss function's inputs are PyTorch variables, the rest is taken care of by torch.autograd.
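For instance, here is a minimal sketch (the model and data are made up, purely for illustration) showing that gradients flow through such a hand-written loss even though no backward() is defined:

import torch
import torch.nn as nn

def mse_loss(input, target):
    return ((input - target) ** 2).sum() / input.data.nelement()

# Made-up model and data; autograd derives the backward pass of the
# hand-written loss from the operations used in its forward computation.
model = nn.Linear(4, 1)
x = torch.randn(8, 4)
target = torch.randn(8, 1)

loss = mse_loss(model(x), target)
loss.backward()                    # no backward() was written for the loss
print(model.weight.grad.shape)     # torch.Size([1, 4])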
I am trying to sort items that have sizes described by two numbers like the following
10 x 13
100 x 60
7 x 8
The size is saved as a string. I want them sorted like this (first by first dimension, then by second dimension)
7 x 8
10 x 13
100 x 60
How can this be achieved with Django? It would be nice if we could somehow use
Item.objects.sort
I would advise not to store these as a string, but as two IntegerFields, for example:
class Item(models.Model):
    width = models.IntegerField()
    height = models.IntegerField()

    @property
    def size(self):
        return f'{self.width}x{self.height}'

    @size.setter
    def size(self, value):
        self.width, self.height = map(int, value.split('x'))
Then you can easily sort with Item.objects.order_by('width', 'height'), for example. We thus have a property .size that formats the item's size as a string, together with a setter that "parses" the value and puts the width and height in the corresponding fields.
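A hypothetical usage sketch, assuming the Item model above and 'WxH'-style strings:

# Hypothetical usage of the Item model sketched above.
Item.objects.create(size='10x13')
Item.objects.create(size='100x60')
Item.objects.create(size='7x8')

for item in Item.objects.order_by('width', 'height'):
    print(item.size)   # 7x8, then 10x13, then 100x60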
You could use the built-in sorted together with math.hypot from the Python math library. I had a similar problem a while back; this is what I used and it worked just fine.
import math
l = ['10x13', '100x60','7x8']
sorted(l, key=lambda dim: math.hypot(*map(int, dim.split('x'))))
# ['7x8', '10x13', '100x60']
I am new to pyomo and Python in general, and I am trying to implement a simple solution to a binary integer programming problem. The problem is large, but a large percentage of the values of the matrix x are known in advance. I have been trying to figure out how to 'tell' pyomo that some values are known in advance and what they are.
from __future__ import division   # converts to float before division
from pyomo.environ import *       # Make symbols used by pyomo known to python

model = AbstractModel()           # Declaration of an abstract model, called model

model.users = Set()
model.slots = Set()
model.prices = Param(model.users, model.slots)
model.users_balance = Param(model.users)
model.slot_bounds = Param(model.slots)

model.x = Var(model.users, model.slots, domain=Binary)

# Define the objective function
def obj_expression(model):
    return sum(sum(model.prices[i, j] * model.x[i, j] for i in model.users)
               for j in model.slots)

model.OBJ = Objective(rule=obj_expression, sense=maximize)

# A user can only be assigned to one slot
def one_slot_rule(model, users):
    return sum(model.x[users, n] for n in model.slots) <= 1

model.OneSlotConstraint = Constraint(model.users, rule=one_slot_rule)

# Certain slots have a minimum balance requirement.
def min_balance_rule1(model, slots):
    return sum(model.x[n, slots] * model.users_balance[n]
               for n in model.users) >= model.slot_bounds[slots]

model.MinBalanceConstraint1 = Constraint(model.slots, rule=min_balance_rule1)
I want to benefit from the fact that I know certain values of x[i,j] to be 0. For example, I have a list of extra conditions:
x[1,7] = 0
x[3,6] = 0
x[5,8] = 0
How do I include this information in order to benefit from reducing the search space?
Many Thanks.
After the model is constructed you can do the following:
model.x[1,7].fix(0)
model.x[3,6].fix(0)
model.x[5,8].fix(0)
or, assuming that you have a Set, model.Arcs, that contains the following:
model.Arcs = Set(initialize=[(1,7), (3,6), (5,8)])
you can fix x variables in a loop:
for i, j in model.Arcs:
    model.x[i, j].fix(0)
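Since this is an AbstractModel, the variables live on the concrete instance, so the fixing happens after create_instance(). A rough sketch (the data file name and solver choice are placeholders):

# 'mydata.dat' and 'glpk' are placeholders for your own data file and solver.
instance = model.create_instance('mydata.dat')

# Fixing the known-zero variables before solving removes them from the
# search space; the solver treats fixed variables as constants.
for i, j in [(1, 7), (3, 6), (5, 8)]:
    instance.x[i, j].fix(0)

SolverFactory('glpk').solve(instance)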
I am dealing with a classification problem. The following code only works when I divide my data into 100 classes; if I divide it into fewer than 100 classes, it gives me an error that the index is out of bounds. My training set has 85000 labels and my test set has 15000. Can someone please tell me why it gives this error and how to fix it?
import numpy

def dense_to_one_hot(labels_dense, num_classes):
    num_labels = labels_dense.shape[0]
    index_offset = numpy.arange(num_labels) * num_classes
    labels_one_hot = numpy.zeros((num_labels, num_classes))
    labels_one_hot.flat[index_offset + labels_dense.ravel()] = 1
    return labels_one_hot

def extract_labels(labels, num_classes, one_hot=False):
    if one_hot:
        return dense_to_one_hot(labels, num_classes)
    return labels
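For what it's worth, the flat-index assignment in dense_to_one_hot only addresses valid positions when every label value is strictly less than num_classes; a small hypothetical check:

import numpy

labels = numpy.array([0, 3, 99])   # made-up label values, the largest is 99
num_classes = 50

# dense_to_one_hot writes row i at flat position i*num_classes + label.
# A label of 99 in a 50-column row spills into the following row, and for
# the last row it points past the end of the array ("index out of bounds").
if labels.max() >= num_classes:
    print('labels must be remapped to the range 0..num_classes-1')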
I'm writing a hash function to create hashes of some given size (e.g. 20 bits).
I have learnt how to write the hashes to files in a binary form (see my related question here), but now I would like to handle these hashes in Python (2.7) using the minimum memory allocation. Right now they are typed as int, so they are allocated 24 bytes each, which is huge for a 20-bit object.
How can I create a custom Python object of arbitrary size (e.g. in my case 3 bytes)?
You could do something like what you want by packing the bits for each object into a packed array of bit (or boolean) values. There are a number of existing Python bitarray extension modules available. Implementing a higher-level "array of fixed-bit-width integer values" with one is a relatively straightforward process.
Here's an example based on one in pypi that's implemented in C for speed. You can also download an unofficial pre-built Windows version of it, created by Christoph Gohlke, from here.
Updated —
Now works in Python 2.7 & 3.x.
from __future__ import print_function
# uses https://pypi.python.org/pypi/bitarray
from bitarray import bitarray as BitArray
try:
    from functools import reduce  # Python 3.
except ImportError:
    pass

class PackedIntArray(object):
    """ Packed array of unsigned fixed-bit-width integer values. """
    def __init__(self, array_size, item_bit_width, initializer=None):
        self.array_size = array_size
        self.item_bit_width = item_bit_width
        self.bitarray = BitArray(array_size * item_bit_width)
        if initializer is not None:
            try:
                iter(initializer)
            except TypeError:  # not iterable
                self.bitarray.setall(initializer)  # set all to bool(initializer)
            else:
                for i in range(array_size):
                    self[i] = initializer[i]  # must be same length as array

    def __getitem__(self, index):
        offset = index * self.item_bit_width
        bits = self.bitarray[offset: offset+self.item_bit_width]
        return reduce(lambda x, y: (x << 1) | y, bits, 0)

    def __setitem__(self, index, value):
        bits = BitArray('{:0{}b}'.format(value, self.item_bit_width))
        offset = index * self.item_bit_width
        self.bitarray[offset: offset+self.item_bit_width] = bits

    def __len__(self):
        """ Return the number of items stored in the packed array. """
        return self.array_size

    def length(self):
        """ Return the number of bits stored in the bitarray. """
        return self.bitarray.length()

    def __repr__(self):
        return ('PackedIntArray({}, {}, ('.format(self.array_size,
                                                  self.item_bit_width) +
                ', '.join(str(self[i]) for i in range(self.array_size)) +
                '))')

if __name__ == '__main__':
    from random import randrange

    # hash function configuration
    BW = 8, 8, 4   # bit widths of each integer
    HW = sum(BW)   # total hash bit width

    def myhash(a, b, c):
        return (((((a & (2**BW[0]-1)) << BW[1]) |
                  b & (2**BW[1]-1)) << BW[2]) |
                c & (2**BW[2]-1))

    hashes = PackedIntArray(3, HW)

    print('hash bit width: {}'.format(HW))
    print('length of hashes array: {:,} bits'.format(hashes.length()))
    print()
    print('populate hashes array:')
    for i in range(len(hashes)):
        hashed = myhash(*(randrange(2**bit_width) for bit_width in BW))
        print('  hashes[{}] <- {:,} (0b{:0{}b})'.format(i, hashed, hashed, HW))
        hashes[i] = hashed
    print()
    print('contents of hashes array:')
    for i in range(len(hashes)):
        print('  hashes[{}]: {:,} '
              '(0b{:0{}b})'.format(i, hashes[i], hashes[i], HW))
Sample output:
hash bit width: 20
length of hashes array: 60 bits

populate hashes array:
  hashes[0] <- 297,035 (0b01001000100001001011)
  hashes[1] <- 749,558 (0b10110110111111110110)
  hashes[2] <- 690,468 (0b10101000100100100100)

contents of hashes array:
  hashes[0]: 297,035 (0b01001000100001001011)
  hashes[1]: 749,558 (0b10110110111111110110)
  hashes[2]: 690,468 (0b10101000100100100100)
Note: bitarray.bitarray objects also have methods to write and read their bits to and from files. These could be used to provide similar file functionality for the PackedIntArray class above.
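A rough sketch of that idea (the file name is made up; tofile() pads the stream out to whole bytes, and the trailing padding bits are simply ignored on read):

# 'hashes.bin' is a made-up file name, purely for illustration.
hashes = PackedIntArray(3, 20, initializer=[297035, 749558, 690468])

with open('hashes.bin', 'wb') as f:
    hashes.bitarray.tofile(f)        # pads up to a whole number of bytes

restored = PackedIntArray(3, 20)
restored.bitarray = BitArray()
with open('hashes.bin', 'rb') as f:
    restored.bitarray.fromfile(f)    # reads back 64 bits; only 60 are used

print(restored[0])                   # 297035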
How can I define a sequence of functions h_k, k = 1, 2, 3, ...,
using two known functions f = f(x) and g = g(x), as follows:
h_1 = f/g,
h_{k+1} = diff(h_k, x)/g, for k = 1, 2, 3, ...
Note that the new functions take two arguments: h(k, x) = h_k(x).
I want to do this in SymPy.
If k will always be an explicit integer, just use a Python function:
def h(x, k):
    if k == 1:
        return f(x)/g(x)
    return diff(h(x, k - 1), x)/g(x)
If you want to allow symbolic k (like k = Symbol('k')), subclass sympy.Function
class h(Function):
    @classmethod
    def eval(cls, x, k):
        if k.is_Integer and k.is_positive:
            if k == 1:
                return f(x)/g(x)
            else:
                return diff(h(x, k - 1), x)/g(x)
(Note that if eval returns None, i.e. it hits the bottom of the function without returning, the function will remain unevaluated.)
Note that we check k.is_Integer with a capital I (not k.is_integer). This means that k is an explicit integer, like 1, 2, 3, .... k.is_integer would also be true for Symbol('k', integer=True), but we don't want to evaluate in this case because we don't know which integer it is.
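As a quick illustrative check of the recursion (using sin and cos as stand-in choices for f and g):

from sympy import symbols, diff, simplify, sin, cos

x = symbols('x')
f, g = sin, cos            # stand-in choices, just for illustration

def h(x, k):
    if k == 1:
        return f(x)/g(x)
    return diff(h(x, k - 1), x)/g(x)

print(h(x, 1))             # sin(x)/cos(x)
print(simplify(h(x, 2)))   # equivalent to sec(x)**3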