How to involve two instances from a different class, python 2 - python-2.7

I'm trying to create a soccer match program for a couple of my friends. I currently have a class to instantiate teams, and I'm now writing the class that instantiates matches. Here is what I have so far:
class Team(object):
    def __init__(self, name, games=0, points=0, goals=0, wins=0, loses=0):
        self.name = name
        self.points = points
        self.games = games
        self.goals = goals
        self.wins = wins
        self.loses = loses

    def win(self):
        self.points += 3
        self.wins += 1
        self.games += 1

    def lose(self):
        self.points += 1
        self.loses += 1
        self.games += 1

    def ratio(self):
        print "(Wins/Loses/Games)\n ", self.wins, "/", self.loses, "/", self.games

class Match(object):
    def __init__(self, teamagoals, teambgoals):
        self.teamagoals = teamagoals
        self.teambgoals = teambgoals

    def playgame(teama, teamb):
        # conditional statements which will decide who wins
        pass

alpha = Team("George's Team")
beta = Team("Josh's Team")
gamma = Team("Fred's Team")
At this point I run into issues with how to proceed. As per my question, I want the Match() class to involve two instances of the Team class. For example, I would like to call a function in Match(), specify that team alpha and team gamma play against each other, and then, depending on who wins or loses, modify the corresponding points, games, etc. values on those instances.
What is the way to do this? Is it possible by putting all of the Team() instances into a list and passing that list into the Match() class? Or is there some other, more elegant way? Please help.

I'm not sure exactly how you intend to select which teams play each other, but say that's done by one function. Then, Match could be implemented as follows:
class Match(object):
    def __init__(self, teama, teamb):
        self.teama = teama
        self.teamb = teamb

    def play(self):
        a_points = 0
        b_points = 0
        # code/function call to decide how many points each team gets
        self.teama.points += a_points
        self.teamb.points += b_points
        if a_points > b_points:
            self.teama.wins += 1
        elif b_points > a_points:
            self.teamb.wins += 1
        else:
            pass  # code for when the teams draw
I'm not sure how you intend to structure some of the functions, like deciding how many points each team gets, but this is a framework that will hopefully be useful and give you ideas on how to structure things.
Following up on your comment: if you would like to pass in specific teams created outside the class, you can do so as follows:
match1 = Match(gamma, alpha)
This creates a new Match instance in which the teams gamma and alpha will be modified (teama and teamb in the class refer to these same objects).
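Putting it together, a minimal usage sketch (note that play() above is still a skeleton, so as written every game ends as a 0-0 draw until you fill in the scoring logic):
alpha = Team("George's Team")
gamma = Team("Fred's Team")

match1 = Match(alpha, gamma)
match1.play()    # updates alpha's and gamma's points/wins in place

alpha.ratio()
gamma.ratio()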

efficient update of objective function in pyomo (for re-solving)

I have a model that I need to solve many times, with different objective function coefficients.
Naturally, I want to spend as little time as possible on updating the model.
My current setup is as follows (simplified):
Abstract model:
def obj_rule(m):
    return sum(Dist[m, n] * m.flow[m, n] for (m, n) in m.From * m.To)

m.obj = Objective(rule=obj_rule, sense=minimize)
After solving, I update the concrete model instance mi this way, with new values in Dist:
mi.obj = sum(Dist[m, n] * mi.flow[m,n] for (m,n) in mi.From * mi.To)
However, I see that the update line takes a lot of time - ca. 1/4 of the overall solution time, several seconds for bigger cases.
Is there some faster way of updating the objective function?
(After all, in the usual way of saving an LP model, the objective function coefficients are in a separate vector, so changing them should not affect anything else.)
Do you have a reason to define an Abstract model before creating your Concrete model? If you define your concrete model with the rule you show above, you should be able to just update your data and re-solve the model without a lot of overhead, since you don't redefine the objective object. Here's a simple example where I change the values of the cost parameter and re-solve.
import pyomo.environ as pyo

a = list(range(2))  # the set the variables are defined over

#%% Begin basic model
model = pyo.ConcreteModel()
model.c = pyo.Param(a, initialize={0: 5, 1: 3}, mutable=True)
model.x = pyo.Var(a, domain=pyo.Binary)
model.con = pyo.Constraint(expr=model.x[0] + model.x[1] <= 1)

def obj_rule(model):
    return sum(model.x[ind] * model.c[ind] for ind in a)

model.obj = pyo.Objective(rule=obj_rule, sense=pyo.maximize)

#%% Solve the first time
solver = pyo.SolverFactory('glpk')
res = solver.solve(model)
print('x[0]: {} \nx[1]: {}'.format(pyo.value(model.x[0]),
                                   pyo.value(model.x[1])))
# x[0]: 1
# x[1]: 0

#%% Update data and re-solve
model.c.reconstruct({0: 0, 1: 5})
res = solver.solve(model)
print('x[0]: {} \nx[1]: {}'.format(pyo.value(model.x[0]),
                                   pyo.value(model.x[1])))
# x[0]: 0
# x[1]: 1
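As a side note, since model.c is declared with mutable=True, you can also assign new values directly to the Param instead of calling reconstruct; a sketch against the model above:
#%% Equivalent update via direct assignment to the mutable Param
model.c[0] = 0
model.c[1] = 5
res = solver.solve(model)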

Replacing nn.Upsample with alternative upsample operation

I have a UNet++ (view the linked article in private mode; the code for the model is at the bottom) which I'm trying to reconfigure. I'm getting artifacts in some images, so I'm following this article, which suggests doing upsampling followed by a convolution operation.
I'm replacing the up-sample layers with the sequential operation shown below, but my model isn't learning. I suspect it's to do with how I've configured the channels, so I'd like another opinion.
Old up-sample operation:
self.up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)
New operations:
class upConv(nn.Module):
    """
    Upsampling / deconv block by a factor of 2
    """
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.upc = nn.Sequential(
            nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True),
            nn.Conv2d(in_ch, out_ch*2, 3, stride=1, padding=1),
            nn.BatchNorm2d(out_ch*2),
            nn.ReLU(inplace=True)
        )

    def forward(self, x):
        out = self.upc(x)
        return out
My question is: do these two operations have the same output/function within my model?
Do these two operations have the same output/function within my model?
If out_ch*2 == in_ch, then yes, they have the same output shape.
If the input x is the output of a BatchNorm+ReLU op, then they could be even more similar.
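A quick way to convince yourself is to compare output shapes directly; a sketch using the upConv class above, with made-up channel sizes:
import torch
import torch.nn as nn

x = torch.randn(1, 64, 32, 32)      # (N, C, H, W), made-up sizes
up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)
upc = upConv(in_ch=64, out_ch=32)   # out_ch*2 == in_ch == 64

print(up(x).shape)    # torch.Size([1, 64, 64, 64])
print(upc(x).shape)   # torch.Size([1, 64, 64, 64]), same shape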

meaning of setUseAdjustedValues(True) in pyalgotrade

Here is an example of an SMA cross strategy. What is the reason we use self.setUseAdjustedValues(True), and how does it work?
from pyalgotrade import strategy
from pyalgotrade.technical import ma
from pyalgotrade.technical import cross

class SMACrossOver(strategy.BacktestingStrategy):
    def __init__(self, feed, instrument, smaPeriod):
        strategy.BacktestingStrategy.__init__(self, feed)
        self.__instrument = instrument
        self.__position = None
        # We'll use adjusted close values instead of regular close values.
        self.setUseAdjustedValues(True)
        self.__prices = feed[instrument].getPriceDataSeries()
        self.__sma = ma.SMA(self.__prices, smaPeriod)

    def getSMA(self):
        return self.__sma

    def onEnterCanceled(self, position):
        self.__position = None

    def onExitOk(self, position):
        self.__position = None

    def onExitCanceled(self, position):
        # If the exit was canceled, re-submit it.
        self.__position.exitMarket()

    def onBars(self, bars):
        # If a position was not opened, check if we should enter a long position.
        if self.__position is None:
            if cross.cross_above(self.__prices, self.__sma) > 0:
                shares = int(self.getBroker().getCash() * 0.9 / bars[self.__instrument].getPrice())
                # Enter a buy market order. The order is good till canceled.
                self.__position = self.enterLong(self.__instrument, shares, True)
        # Check if we have to exit the position.
        elif not self.__position.exitActive() and cross.cross_below(self.__prices, self.__sma) > 0:
            self.__position.exitMarket()
If you use regular close values, instead of adjusted ones, your strategy may react to price changes that are actually the result of a stock split and not a price change due to regular trading activity.
As I understand it, simplified: suppose a share's price is 100. The next day the stock splits 1:2, so there are now two shares worth 50 each. That price drop is not due to trading activity; no trade was involved in lowering the price. setUseAdjustedValues(True) handles this situation by using the adjusted price series instead.
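As a toy illustration of the arithmetic involved (made-up numbers, not the pyalgotrade API):
# Raw closes around a 1:2 split between day 2 and day 3 (made-up data).
raw_close = [100.0, 102.0, 51.0, 52.0]   # looks like a 50% crash on day 3
split_factor = 2.0                       # 1:2 split

# Scale the pre-split prices so the whole series is comparable.
adj_close = [p / split_factor for p in raw_close[:2]] + raw_close[2:]
print(adj_close)  # [50.0, 51.0, 51.0, 52.0], no artificial jump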

The queryset's `count` is wrong after `extra`

When I use extra in a certain way on a Django queryset (call it qs), the result of qs.count() is different than len(qs.all()). To reproduce:
Make an empty Django project and app, then add a trivial model:
class Baz(models.Model):
    pass
Now make a few objects:
>>> Baz(id=1).save()
>>> Baz(id=2).save()
>>> Baz(id=3).save()
>>> Baz(id=4).save()
Using the extra method to select only some of them produces the expected count:
>>> Baz.objects.extra(where=['id > 2']).count()
2
>>> Baz.objects.extra(where=['-id < -2']).count()
2
But add a select clause to the extra and refer to it in the where clause, and the count is suddenly wrong, even though the result of all() is correct:
>>> Baz.objects.extra(select={'negid': '0 - id'}, where=['"negid" < -2']).all()
[<Baz: Baz object>, <Baz: Baz object>] # As expected
>>> Baz.objects.extra(select={'negid': '0 - id'}, where=['"negid" < -2']).count()
0 # Should be 2
I think the problem has to do with django.db.models.sql.query.BaseQuery.get_count(). It checks whether the BaseQuery's select or aggregate_select attributes have been set; if so, it uses a subquery. But django.db.models.sql.query.BaseQuery.add_extra adds only to the BaseQuery's extra attribute, not select or aggregate_select.
How can I fix the problem? I know I could just use len(qs.all()), but it would be nice to be able to pass the extra'ed queryset to other parts of the code, and those parts may call count() without knowing that it's broken.
Redefining get_count() and monkeypatching appears to fix the problem:
def get_count(self):
    """
    Performs a COUNT() query using the current filter constraints.
    """
    obj = self.clone()
    if len(self.select) > 1 or self.aggregate_select or self.extra:
        # If a select clause exists, then the query has already started to
        # specify the columns that are to be returned.
        # In this case, we need to use a subquery to evaluate the count.
        from django.db.models.sql.subqueries import AggregateQuery
        subquery = obj
        subquery.clear_ordering(True)
        subquery.clear_limits()
        obj = AggregateQuery(obj.model, obj.connection)
        obj.add_subquery(subquery)
    obj.add_count_column()
    number = obj.get_aggregation()[None]
    # Apply offset and limit constraints manually, since using LIMIT/OFFSET
    # in SQL (in variants that provide them) doesn't change the COUNT
    # output.
    number = max(0, number - self.low_mark)
    if self.high_mark is not None:
        number = min(number, self.high_mark - self.low_mark)
    return number

django.db.models.sql.query.BaseQuery.get_count = quuux.get_count
Testing:
>>> Baz.objects.extra(select={'negid': '0 - id'}, where=['"negid" < -2']).count()
2
Updated to work with Django 1.2.1:
def basequery_get_count(self, using):
    """
    Performs a COUNT() query using the current filter constraints.
    """
    obj = self.clone()
    if len(self.select) > 1 or self.aggregate_select or self.extra:
        # If a select clause exists, then the query has already started to
        # specify the columns that are to be returned.
        # In this case, we need to use a subquery to evaluate the count.
        from django.db.models.sql.subqueries import AggregateQuery
        subquery = obj
        subquery.clear_ordering(True)
        subquery.clear_limits()
        obj = AggregateQuery(obj.model)
        obj.add_subquery(subquery, using=using)
    obj.add_count_column()
    number = obj.get_aggregation(using=using)[None]
    # Apply offset and limit constraints manually, since using LIMIT/OFFSET
    # in SQL (in variants that provide them) doesn't change the COUNT
    # output.
    number = max(0, number - self.low_mark)
    if self.high_mark is not None:
        number = min(number, self.high_mark - self.low_mark)
    return number

models.sql.query.Query.get_count = basequery_get_count
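With the patch applied, a quick regression check against the Baz objects created above:
qs = Baz.objects.extra(select={'negid': '0 - id'}, where=['"negid" < -2'])
assert len(list(qs)) == 2   # iterating runs the full query
assert qs.count() == 2      # now agrees, instead of returning 0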
I'm not sure if this fix will have other unintended consequences, however.

Using the "extra fields " from django many-to-many relationships with extra fields

The Django docs give this example of associating extra data with an M2M relationship. Although that is straightforward, now that I am trying to make use of the extra data in my views it feels very clumsy (which typically means "I'm doing it wrong").
For example, using the models defined in the linked document above I can do the following:
from datetime import date

# Some people
ringo = Person.objects.create(name="Ringo Starr")
paul = Person.objects.create(name="Paul McCartney")
me = Person.objects.create(name="Me the rock Star")

# Some bands
beatles = Group.objects.create(name="The Beatles")
my_band = Group.objects.create(name="My Imaginary band")

# The Beatles form
m1 = Membership.objects.create(person=ringo, group=beatles,
                               date_joined=date(1962, 8, 16),
                               invite_reason="Needed a new drummer.")
m2 = Membership.objects.create(person=paul, group=beatles,
                               date_joined=date(1960, 8, 1),
                               invite_reason="Wanted to form a band.")

# My Imaginary band forms
m3 = Membership.objects.create(person=me, group=my_band,
                               date_joined=date(1980, 10, 5),
                               invite_reason="Want to be a star.")
m4 = Membership.objects.create(person=paul, group=my_band,
                               date_joined=date(1980, 10, 5),
                               invite_reason="Wanted to form a better band.")
Now if I want to print a simple table that for each person gives the date that they joined each band, at the moment I am doing this:
bands = Group.objects.all().order_by('name')
for person in Person.objects.all():
    print person.name,
    for band in bands:
        print band.name,
        try:
            m = person.membership_set.get(group=band.pk)
            print m.date_joined,
        except Membership.DoesNotExist:
            print 'NA',
    print ""
Which feels very ugly, especially the "m = person.membership_set.get(group=band.pk)" bit. Am I going about this whole thing wrong?
Now say I wanted to order the people by the date that they joined a particular band (say the Beatles): is there any order_by clause I can put on Person.objects.all() that would let me do that?
Any advice would be greatly appreciated.
You should query the Membership model instead:
members = Membership.objects.select_related('person', 'group').all().order_by('date_joined')
for m in members:
    print m.group.name, m.person.name, m.date_joined
Using select_related here avoids the 1 + N queries problem: it tells the ORM to do the join and select everything in a single query.
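For the second part of the question, ordering people by the date they joined one particular band, a filtered query on Membership works too; a sketch assuming the objects created above:
beatles_members = (Membership.objects
                   .select_related('person')
                   .filter(group=beatles)
                   .order_by('date_joined'))
for m in beatles_members:
    print m.person.name, m.date_joined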