I am new to Pyomo and to Python in general, and I am trying to implement a simple solution to a binary integer programming problem. The problem is large, but a large percentage of the values of the variable matrix x are known in advance. I have been trying to figure out how to 'tell' Pyomo that some values are known in advance and what they are.
from __future__ import division   # converts to float before division
from pyomo.environ import *       # Make symbols used by Pyomo known to Python

model = AbstractModel()           # Declaration of an abstract model, called model
model.users = Set()
model.slots = Set()
model.prices = Param(model.users, model.slots)
model.users_balance = Param(model.users)
model.slot_bounds = Param(model.slots)
model.x = Var(model.users, model.slots, domain=Binary)
# Define the objective function
def obj_expression(model):
    return sum(sum(model.prices[i,j] * model.x[i,j] for i in model.users)
               for j in model.slots)

model.OBJ = Objective(rule=obj_expression, sense=maximize)
# A user can only be assigned to one slot
def one_slot_rule(model, users):
    return sum(model.x[users,n] for n in model.slots) <= 1

model.OneSlotConstraint = Constraint(model.users, rule=one_slot_rule)
# Certain slots have a minimum balance requirement.
def min_balance_rule1(model, slots):
    return sum(model.x[n,slots] * model.users_balance[n]
               for n in model.users) >= model.slot_bounds[slots]

model.MinBalanceConstraint1 = Constraint(model.slots, rule=min_balance_rule1)
I want to take advantage of the fact that I know certain values of x[i,j] to be 0. For example, I have a list of extra conditions:
x[1,7] = 0
x[3,6] = 0
x[5,8] = 0
How do I include this information in order to benefit from reducing the search space?
Many Thanks.
After the model is constructed you can do the following:
model.x[1,7].fix(0)
model.x[3,6].fix(0)
model.x[5,8].fix(0)
or, assuming that you have a Set, model.Arcs, that contains the following:
model.Arcs = Set(initialize=[(1,7), (3,6), (5,8)])
you can fix x variables in a loop:
for i,j in model.Arcs:
    model.x[i,j].fix(0)
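If it helps to see where this fits, here is a minimal sketch of fixing the known zeros on a concrete instance and then solving. The data file name and the choice of glpk are placeholder assumptions, not part of the question:

from pyomo.environ import SolverFactory

# Build a concrete instance of the abstract model ('my_data.dat' is hypothetical).
instance = model.create_instance('my_data.dat')

# Fix the variables known to be zero; the solver treats them as constants,
# which removes them from the search space.
for i, j in [(1, 7), (3, 6), (5, 8)]:
    instance.x[i, j].fix(0)

# Solve with any installed MIP solver.
SolverFactory('glpk').solve(instance)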
I am looking to evaluate the sum of an infinite geometric series in SymPy, and want to use the fact that I know the sum has to converge. (Similar to this post: How to Sum with conditions on Sympy?)
My code:
import sympy as sp
from sympy import oo
from sympy.assumptions import assuming, Q
from sympy.assumptions.assume import global_assumptions

x, k = sp.symbols('x k')
#global_assumptions.add(Q.is_true(sp.Abs(x) < 1))
with assuming(Q.is_true(sp.Abs(x) < 1)):
    y = sp.Sum(x**k, (k, 0, oo)).doit()
print y
The result is:
Piecewise((1/(-x + 1), Abs(x) < 1), (Sum(x**k, (k, 0, oo)), True))
So it seems the assumption that abs(x)<1 is not taken into account.
Using the global_assumptions (commented out here) does not give the desired result.
Concretely, how do I evaluate the sum such that the result would be 1/(1-x)?
At present, the assumptions made by the assumptions module are not used by the rest of SymPy, which makes them less useful than one might hope.
You can sort of fake it by using .subs like this:
y = sp.Sum(x**k, (k,0,oo)).doit().subs(sp.Abs(x) < 1, True)
which returns 1/(-x + 1).
I think this is the best one can do at present. Because this is just a literal substitution of True for a condition, rather than a logical inference, it won't work when the assumption doesn't exactly match a condition in Piecewise:
y = sp.Sum(x**k, (k,0,oo)).doit().subs(sp.Abs(x) < 1/2, True) # alas :(
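If all you need is the convergent branch, another workaround (a manual extraction in the same spirit as the .subs trick above, again with no logical inference by SymPy) is to pull the first (expression, condition) pair out of the Piecewise:

y = sp.Sum(x**k, (k, 0, oo)).doit()
convergent = y.args[0][0]  # expression of the first branch: 1/(-x + 1)

This relies on knowing that the first branch is the Abs(x) < 1 case, so it is no more general than the substitution approach.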
I have already defined my own loss function, and it runs, so the forward pass seems fine. But I am not sure whether it is correct, because I do not define backward().
import numpy as np
import torch.nn as nn
import torch.nn.functional as F
# BATCH_SIZE and get_weight() are defined elsewhere in my code.

class _Loss(nn.Module):
    def __init__(self, size_average=True):
        super(_Loss, self).__init__()
        self.size_average = size_average

class MyLoss(_Loss):
    def forward(self, input, target):
        loss = 0
        weight = np.zeros((BATCH_SIZE, BATCH_SIZE))
        for a in range(BATCH_SIZE):
            for b in range(BATCH_SIZE):
                weight[a][b] = get_weight(target.data[a][0])
        for i in range(BATCH_SIZE):
            for j in range(BATCH_SIZE):
                a_ij = (input[i] - input[j] - target[i] + target[j]) * weight[i, j]
                loss += F.relu(a_ij)
        return loss
The questions I want to ask are:
1) Do I need to define backward() for the loss function?
2) How would I define backward()?
3) Is there any way to index the data while doing SGD in torch?
You can write a loss function like the one below.
def mse_loss(input, target):
    return ((input - target) ** 2).sum() / input.data.nelement()
You do not need to implement a backward function. As long as the loss is computed from PyTorch variables using differentiable operations, the rest is taken care of by torch.autograd.
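If you prefer the nn.Module style used in the question, here is a minimal sketch of a custom loss built only from standard torch operations, so autograd derives the gradients automatically. The pairwise hinge formula and the names PairwiseHingeLoss, predictions, and targets are illustrative assumptions, not the original poster's exact loss:

import torch
import torch.nn as nn
import torch.nn.functional as F

class PairwiseHingeLoss(nn.Module):
    # Illustrative pairwise loss; every op is differentiable, so no backward() is needed.
    def forward(self, input, target):
        diff = input.unsqueeze(0) - input.unsqueeze(1)     # pairwise prediction differences
        tdiff = target.unsqueeze(0) - target.unsqueeze(1)  # pairwise target differences
        return F.relu(diff - tdiff).sum()                  # hinge on the mismatch, summed over pairs

criterion = PairwiseHingeLoss()
loss = criterion(predictions, targets)  # hypothetical model outputs and labels
loss.backward()                         # gradients are computed by autograd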
I'm trying to generate a random integer and assign it to a variable.
import random
import time

Op = lambda: random.randint(1300, 19000)
op = "https://duckduckgo.com/html?q="
variable = int(Op())
grow = 0
while grow < 3:
    print(Op())
    grow = grow + 1
    time.sleep(1)
Here everything works fine: print outputs a different number on each of the three iterations.
However when I want to format this code like this:
Op = lambda: random.randint(1300, 19000)
op = "https://duckduckgo.com/html?q="
Op1 = int(Op())
pop = str("{}{}").format(op, Op1)
grow = 0
while grow < 3:
    print(pop)
    grow = grow + 1
    time.sleep(1)
Then the function print gives me the same number three times.
For example:
>>>https://duckduckgo.com/html?q=44543
>>>https://duckduckgo.com/html?q=44543
>>>https://duckduckgo.com/html?q=44543
And I would like to get three random numbers. For example:
>>>https://duckduckgo.com/html?q=44325
>>>https://duckduckgo.com/html?q=57323
>>>https://duckduckgo.com/html?q=35691
I was trying to use %s - %d formatting but the result is the same.
That is because you never change the value of pop.
In your first example you call Op() on every iteration, but in the second example you call it once, outside the loop, and then print the same value each time.
Try this:
Op = lambda: random.randint(1300, 19000)
op = "https://duckduckgo.com/html?q="
grow = 0
while grow < 3:
    pop = str("{}{}").format(op, int(Op()))
    print(pop)
    grow = grow + 1
    time.sleep(1)
Lambda functions are by definition anonymous. If you need to "remember" a lambda's procedure, just use a def statement. But actually you don't even need that:
import random
import time

url_base = "https://duckduckgo.com/html?q={}"
grow = 0
while grow < 3:
    print(url_base.format(random.randint(1300, 19000)))
    grow = grow + 1
    time.sleep(1)
Your main problem is that you are assigning fixed values to variables and expecting them to behave like procedures.
You need to apply the randomness on every iteration; instead, you calculate a random number once and plug that same value into every pass through the loop.
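If you do want a named, reusable helper rather than a bare lambda, a def behaves the same way: the random number is drawn each time the function is called, not when it is defined (random_url is just an illustrative name):

import random

def random_url():
    # A fresh random number is generated on every call.
    return "https://duckduckgo.com/html?q={}".format(random.randint(1300, 19000))

for _ in range(3):
    print(random_url())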
I am trying to evaluate an expression while taking an assumption into account. Specifically, my problem involves IndexedBase objects.
See the following code:
from sympy import *
init_printing(use_latex="mathjax")

ntot = symbols("n_tot", integer=True)
i = Idx("i", (1, ntot))
k = Idx("k", (1, ntot))
j = Idx("j", (1, ntot))
x = IndexedBase("x")
As an example let's take the derivative of two summations over x[i].
expr = Sum(Sum(x[i],(i,1,ntot)),(k,1,ntot)).diff(x[j])
(NOTE: this is not possible in the current SymPy version 1.0, it is possible with the development version and will be available in future SymPy stable versions.)
I want to evaluate the expression and get a piecewise answer:
print(expr.doit())
OUTPUT: n_tot*Piecewise((1, And(1 <= j, j <= n_tot)), (0, True))
So my problem is: how can I tell SymPy that I know for certain that j is between 1 and n_tot, so that the Piecewise evaluates to 1?
I tried the following but with no luck:
with assuming(j == 2):
    expr = Sum(Sum(x[i], (i, 1, ntot)), (k, 1, ntot)).diff(x[j]).doit()
Assumptions on inequalities are a sorely missed feature in SymPy.
Technically the Idx object was created to allow a symbol to contain a definition range, so as to put limits on indexed symbols. Your j already has this information:
In [28]: j.upper
Out[28]: n_tot
In [29]: j.lower
Out[29]: 1
Unfortunately, the inequality class is not meant to handle Idx objects, so its range gets disregarded.
You could actually try:
In [32]: simplify(expr.doit()).args[0][0]
Out[32]: n_tot
This manually extracts the first term of the Piecewise expression.
Obviously, the current algorithm needs improvement: it should already tell Sum that j is within the correct range, so that the result comes out as 1.
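Putting the pieces together, here is a minimal end-to-end sketch of the manual workaround above (it assumes a SymPy version in which differentiating a Sum with respect to an indexed variable is supported, as noted in the question):

from sympy import *

ntot = symbols("n_tot", integer=True)
i, j, k = Idx("i", (1, ntot)), Idx("j", (1, ntot)), Idx("k", (1, ntot))
x = IndexedBase("x")

expr = Sum(Sum(x[i], (i, 1, ntot)), (k, 1, ntot)).diff(x[j])
result = simplify(expr.doit()).args[0][0]  # manually take the in-range branch
print(result)  # n_tot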
I'm writing a hash function to create hashes of some given size (e.g. 20 bits).
I have learnt how to write the hashes to files in binary form (see my related question here), but now I would like to handle these hashes in Python (2.7) with minimal memory allocation. Right now they are typed as int, so each one takes 24 bytes, which is huge for a 20-bit object.
How can I create a custom Python object of arbitrary size (e.g. in my case 3 bytes)?
You could do something like what you want by packing the bits for each object into a packed array of bit (or boolean) values. There are a number of existing Python bit-array extension modules available, and implementing a higher-level "array of fixed-bit-width integer values" on top of one of them is relatively straightforward.
Here's an example based on the bitarray package on PyPI, which is implemented in C for speed. You can also download an unofficial pre-built Windows version of it, created by Christoph Gohlke, from here.
Update: now works in both Python 2.7 and 3.x.
from __future__ import print_function
# uses https://pypi.python.org/pypi/bitarray
from bitarray import bitarray as BitArray
try:
    from functools import reduce  # Python 3.
except:
    pass

class PackedIntArray(object):
    """ Packed array of unsigned fixed-bit-width integer values. """
    def __init__(self, array_size, item_bit_width, initializer=None):
        self.array_size = array_size
        self.item_bit_width = item_bit_width
        self.bitarray = BitArray(array_size * item_bit_width)
        if initializer is not None:
            try:
                iter(initializer)
            except TypeError:  # not iterable
                self.bitarray.setall(initializer)  # set all to bool(initializer)
            else:
                for i in range(array_size):
                    self[i] = initializer[i]  # must be same length as array

    def __getitem__(self, index):
        offset = index * self.item_bit_width
        bits = self.bitarray[offset: offset+self.item_bit_width]
        return reduce(lambda x, y: (x << 1) | y, bits, 0)

    def __setitem__(self, index, value):
        bits = BitArray('{:0{}b}'.format(value, self.item_bit_width))
        offset = index * self.item_bit_width
        self.bitarray[offset: offset+self.item_bit_width] = bits

    def __len__(self):
        """ Return the number of items stored in the packed array. """
        return self.array_size

    def length(self):
        """ Return the number of bits stored in the bitarray. """
        return self.bitarray.length()

    def __repr__(self):
        return('PackedIntArray({}, {}, ('.format(self.array_size,
                                                 self.item_bit_width) +
               ', '.join((str(self[i]) for i in range(self.array_size))) +
               '))')

if __name__ == '__main__':
    from random import randrange

    # hash function configuration
    BW = 8, 8, 4  # bit widths of each integer
    HW = sum(BW)  # total hash bit width

    def myhash(a, b, c):
        return (((((a & (2**BW[0]-1)) << BW[1]) |
                   b & (2**BW[1]-1)) << BW[2]) |
                   c & (2**BW[2]-1))

    hashes = PackedIntArray(3, HW)

    print('hash bit width: {}'.format(HW))
    print('length of hashes array: {:,} bits'.format(hashes.length()))
    print()
    print('populate hashes array:')
    for i in range(len(hashes)):
        hashed = myhash(*(randrange(2**bit_width) for bit_width in BW))
        print('  hashes[{}] <- {:,} (0b{:0{}b})'.format(i, hashed, hashed, HW))
        hashes[i] = hashed
    print()
    print('contents of hashes array:')
    for i in range(len(hashes)):
        print(('  hashes[{}]: {:,} '
               '(0b{:0{}b})'.format(i, hashes[i], hashes[i], HW)))
Sample output:
hash bit width: 20
length of hashes array: 60 bits
populate hashes array:
hashes[0] <- 297,035 (0b01001000100001001011)
hashes[1] <- 749,558 (0b10110110111111110110)
hashes[2] <- 690,468 (0b10101000100100100100)
contents of hashes array:
hashes[0]: 297,035 (0b01001000100001001011)
hashes[1]: 749,558 (0b10110110111111110110)
hashes[2]: 690,468 (0b10101000100100100100)
Note: bitarray.bitarray objects also have methods to write and read their bits to and from files. These could be used to also provide similar functionality to the PackedIntArray class above.
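As a rough sketch of that, continuing the script above (hashes.bin is a made-up file name; tofile pads the data to a whole number of bytes, so a few trailing zero bits may appear when reading back):

# Write the packed bits to disk.
with open('hashes.bin', 'wb') as f:
    hashes.bitarray.tofile(f)

# Read them back into a fresh PackedIntArray.
restored = PackedIntArray(3, HW)
with open('hashes.bin', 'rb') as f:
    restored.bitarray = BitArray()
    restored.bitarray.fromfile(f)  # may include trailing padding bits
print(restored[0], restored[1], restored[2])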