C++ method overloading in Python

Suppose a C++ class has several constructors that are overloaded according to the number, type, and order of their parameters, for example constructor(int x, int y) and constructor(float x, float y, float z). I believe these two are overloaded methods, and which one is used depends on the arguments, right? So how could I create a constructor in Python that works like this? I notice that Python has def method(self, *args, **kwargs), so could I write def __init__(self, *args) and then check the length of args: if len(args) == 2, construct according to the two-parameter constructor; if len(args) == 3, use the three-parameter constructor; and so on. Does that work? Is there a better way to do it in Python, or should I think about it differently, in a way that takes advantage of Python's own features? Thanks~

Usually, you're fine with some combination of:
- a slightly altered design
- default arguments (def __init__(self, x=0.0, y=0.0, z=0.0))
- polymorphism (in a duck-typed language, you don't need an overload for SomeThing vs. SomeSlightlyDifferentThing if neither inherits from the other, as long as their interfaces are similar enough)
If that doesn't seem feasible, try harder ;) If it still doesn't seem feasible, look at David's link.
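To illustrate the duck-typing point: a single function can serve unrelated types as long as they expose the same interface. The classes below are hypothetical, just a minimal sketch:

```python
class Circle:
    def __init__(self, r):
        self.r = r

    def area(self):
        return 3.14159 * self.r ** 2


class Square:
    def __init__(self, side):
        self.side = side

    def area(self):
        return self.side ** 2


def total_area(shapes):
    # No overloads needed: any object with an .area() method works,
    # regardless of its place in the class hierarchy.
    return sum(s.area() for s in shapes)
```

total_area([Circle(1), Square(2)]) works even though Circle and Square share no base class; in C++ you would need either a common interface or an overload per type.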

It really depends on what you want to do. The *args/**kwargs method works fairly well, as do the default arguments that delnan suggests.
The main difference between C++ and Python in this case is what you are trying to do, and why. If you have a class that needs floats, just try casting the arguments to floats. You can also rely on default arguments to branch your logic:
class Point(object):
    def __init__(self, x=0.0, y=0.0, z=None):
        # Because None is a singleton,
        # it's like Highlander - there can be only one! So use 'is'
        # for identity comparison
        if z is None:
            self.x = int(x)
            self.y = int(y)
            self.z = None
        else:
            self.x = float(x)
            self.y = float(y)
            self.z = float(z)

p1 = Point(3, 5)
p2 = Point(1.0, 3.3, 4.2)
p3 = Point('3', '4', '5')

points = [p1, p2, p3]
for p in points:
    print(p.x, p.y, p.z)
You don't, of course, have to assign self.z = None, that was simply for the convenience of my example.
For the best advice about which pattern to use:
import this
The Zen of Python, by Tim Peters
Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
...
If your pattern is beautiful, explicit, and simple, it just may be the right one to use.

I think these two are overloaded methods, which one to use depends on the parameters, right?
Sorry, if I seem to be nitpicking, but just thought of bringing this difference out clearly.
The terms parameter and argument have very specific meanings in C++:
argument: an expression in the comma-separated list bounded by the parentheses in a function call expression, a sequence of preprocessing tokens in the comma-separated list bounded by the parentheses in a function-like macro invocation, the operand of throw, or an expression, type-id or template-name in the comma-separated list bounded by the angle brackets in a template instantiation. Also known as an actual argument or actual parameter.
parameter: an object or reference declared as part of a function declaration or definition, or in the catch clause of an exception handler, that acquires a value on entry to the function or handler; an identifier from the comma-separated list bounded by the parentheses immediately following the macro name in a function-like macro definition; or a template-parameter. Parameters are also known as formal arguments or formal parameters.

This article talks about how a multimethod decorator can be created in Python. I haven't tried out the code that they give, but the syntax that it defines looks quite nice. Here's an example from the article:
from mm import multimethod

@multimethod(int, int)
def foo(a, b):
    ...code for two ints...

@multimethod(float, float)
def foo(a, b):
    ...code for two floats...

@multimethod(str, str)
def foo(a, b):
    ...code for two strings...


Does python have the ability to create dynamic keywords?
For example:
qset.filter(min_price__usd__range=(min_price, max_price))
I want to be able to change the usd part based on a selected currency.
Yes, it does. Use **kwargs in a function definition.
Example:
def f(**kwargs):
    print(list(kwargs.keys()))

f(a=2, b="b")      # -> ['a', 'b']
f(**{'d'+'e': 1})  # -> ['de']
But why do you need that?
If I understand what you're asking correctly,
qset.filter(**{
    'min_price__' + selected_currency + '__range':
        (min_price, max_price)})
does what you need.
You can easily do this by declaring your function like this:
def filter(**kwargs):
your function will now be passed a dictionary called kwargs that contains the keywords and values passed to your function. Note that the name kwargs itself is not special; the ** is what causes the dynamic keyword behavior.
You can also do the reverse. If you are calling a function, and you have a dictionary that corresponds to the arguments, you can do
someFunction(**theDictionary)
There is also the lesser-used *foo variant, which causes you to receive a tuple of positional arguments. This is similar to C varargs.
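A quick sketch of the *foo form (the function name is just illustrative):

```python
def average(*args):
    # args arrives as a tuple of all positional arguments
    return sum(args) / len(args)
```

average(1, 2, 3) returns 2.0 (in Python 3), and a sequence can be unpacked into it the same way: average(*[4, 6]) returns 5.0.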
Yes, sort of.
In your filter method you can declare a wildcard variable that collects all the unknown keyword arguments. Your method might look like this:
def filter(self, **kwargs):
    for key, value in kwargs.items():
        if key.startswith('min_price__') and key.endswith('__range'):
            currency = key.replace('min_price__', '').replace('__range', '')
            rate = self.current_conversion_rates[currency]
            self.setCurrencyRange(value[0] * rate, value[1] * rate)

What is the idiomatic (and fast) way of treating the empty list/Seq as failure in a short-circuiting operation?

I have a situation where I am using functions to model rule applications, with each function returning the actions it would take when applied, or, if the rule cannot be applied, the empty list. I have a number of rules that I would like to try in sequence and short-circuit. In other languages I am used to, I would treat the empty sequence as false/None and chain them with orElse, like this:
def ruleOne(): Seq[Action] = ???
def ruleTwo(): Seq[Action] = ???
def ruleThree(): Seq[Action] = ???
def applyRules(): Seq[Action] = ruleOne().orElse(ruleTwo).orElse(ruleThree)
However, as I understand the situation, this will not work and will, in fact, do something other than what I expect.
I could use return which feels bad to me, or, even worse, nested if statements. if let would have been great here, but AFAICT Scala does not have that.
What is the idiomatic approach here?
You have different approaches here.
One of them is combining all the results inside a Seq (creating a Seq[Seq[Action]]) and then using find (which returns the first element that matches a given condition). So, for instance:
Seq(ruleOne, ruleTwo, ruleThree).find(_.nonEmpty).getOrElse(Seq.empty[Action])
I do not know your application domain in detail, but the final getOrElse converts the Option produced by the find method into a Seq. Note, though, that this approach evaluates all the sequences eagerly (no short-circuiting).
Another approach consists in enriching Seq with a method that simulates your idea of orElse, using the pimp-my-library / extension-method pattern:
implicit class RichSeq[T](left: Seq[T]) {
  def or(right: => Seq[T]): Seq[T] = if (left.isEmpty) { right } else { left }
}
The by-name parameter enables short-circuit evaluation: the right sequence is computed only if the left sequence is empty.
Scala 3 has a nicer syntax for this kind of abstraction:
extension [T](left: Seq[T]) {
  def or(right: => Seq[T]): Seq[T] = if (left.nonEmpty) { left } else { right }
}
In this way, you can call:
ruleOne or ruleTwo or ruleThree
Scastie for scala 2
Scastie for scala 3

How to get a value from multiple functions in Pyomo

Let's suppose that the objective function is
max z(x,y) = f1(x) - f2(y)
where f1 is a function of the x variables and f2 is a function of the y variables.
This could be written in Pyomo as
def z(model):
    return f1(model) - f2(model)

def f1(model):
    return [some summation of x variables with some coefficients]

def f2(model):
    return [some summation of y variables with some coefficients]

model.objective = Objective(rule=z)
I know it is possible to get the numeric value of z(x,y) easily by calling (since it is the objective function) :
print(model.objective())
but is there a way to get the numeric value of any of these sub-functions separately after the optimization, even if they are not explicitly defined as objectives?
I'll answer your question in terms of a ConcreteModel, since rules in Pyomo, for the most part, are nothing more than a mechanism to delay building a ConcreteModel. For now, they are also required to define indexed objects, but that will likely change soon.
First, there is nothing stopping you from defining those "rules" as standard functions that take in some argument and return a value. E.g.,
def z(x, y):
    return f1(x) - f2(y)

def f1(x):
    return x + 1

def f2(y):
    return y**2

Now if you call any of these functions with a built-in type (e.g., z(1, 5)), you will get a number back. However, if you call them with Pyomo variables (or Pyomo expressions) you will get a Pyomo expression back, which you can assign to an objective or constraint. This works because Pyomo modeling components, such as variables, overload the standard algebraic operators like +, -, *, etc. Here is an example of how you can build an objective with these functions:
import pyomo.environ as aml
m = aml.ConcreteModel()
m.x = aml.Var()
m.y = aml.Var()
m.o = aml.Objective(expr= z(m.x, m.y))
Now if m.x and m.y have a value loaded into them (i.e., the .value attribute is something other than None), then you can call one of the sub-functions with them and evaluate the returned expression (slower)
aml.value(f1(m.x))
aml.value(f2(m.y))
or you can extract the value from them and pass that to the sub-functions (faster)
f1(m.x.value)
f2(m.y.value)
You can also use the Expression object to store sub-expressions that you want to evaluate on the fly or share among multiple other expressions on a model (all of which you can update by changing what expression is stored under the Expression object).
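The operator-overloading mechanism described above can be sketched in plain Python. This is only an illustration of the general technique, not Pyomo's actual implementation:

```python
class Expr:
    """Tiny expression-tree node: records an operation instead of computing it."""
    def __init__(self, op, left, right):
        self.op, self.left, self.right = op, left, right

    def __add__(self, other):
        return Expr('+', self, other)

    def __sub__(self, other):
        return Expr('-', self, other)


class Var(Expr):
    """A leaf whose value can be filled in later, loosely like a Pyomo Var."""
    def __init__(self, value=None):
        self.value = value


def evaluate(node):
    # Numbers evaluate to themselves, Vars to their current value,
    # and interior nodes recurse into both children.
    if isinstance(node, (int, float)):
        return node
    if isinstance(node, Var):
        return node.value
    a, b = evaluate(node.left), evaluate(node.right)
    return a + b if node.op == '+' else a - b


x, y = Var(), Var()
e = x - y + 1           # builds an expression tree; nothing is computed yet
x.value, y.value = 5, 2
```

evaluate(e) then walks the tree and returns 4; Pyomo's expression system follows roughly this pattern at much larger scale, with value() playing the role of evaluate().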

someIterator.next() in Python 2 vs. next(someIterator) in Python 3

In Python 2 iterators offer .next(), a callable method:
it = iter(xrange(10))
it.next()
> 0
it.next()
> 1
...
In Python 3 one has to use the built-in function next():
it = iter(range(10))
next(it)
> 0
next(it)
> 1
...
Is this just "syntactic sugar"? Like making it more obvious to use next() by moving it into the built-in functions? Or does any advanced concept hide behind this change?
You are asking directly about PEP 3114. Consider the following code:
class test:
    def __iter__(self):
        return self
    def next(self):
        return "next"
    def __next__(self):
        return "__next__"

x = test()
for i, thing in enumerate(x):
    print(thing)
    if i > 4:
        break
In Python 2, "next" is printed, but in Python 3, "__next__" is printed. Since the method is called implicitly, it makes far more sense for it to match other implicit methods such as __add__ or __getitem__, and that is what the PEP describes.
If you are planning on using next explicitly, then, much like len, iter, getattr, hash, etc., Python provides a built-in function to call the implicit method for you. At least... after PEP 3114. 😀
Also, the next built-in lets you pass a default value if you don't want an error raised when the iterator is exhausted, which is frequently useful:
it = iter([])
x = next(it, "pls don't crash")
which couldn't really be standardized for a .next() method call. As well, objects such as a random number generator may define a .next() (next number) method without necessarily wanting to be an iterator, which could have left ambiguity and confusion.

Django ugettext_lazy, interpolation and ChoiceField

I want a ChoiceField with these choices:
choices = [(1, '1 thing'),
           (2, '2 things'),
           (3, '3 things'),
           ...]
and I want to have it translated.
This does not work:
choices = [(i, ungettext_lazy('%s thing', '%s things', i) % i) for i in range(1,4)]
because as soon as the lazy object is interpolated, it becomes a unicode object; since ChoiceField.choices is evaluated at startup, the choices will be in whatever language was active during Django's startup.
I could use ugettext_lazy('%s things' % i), but that would require a translation for each numeral, which is silly. What is the right way to do this?
In the Django documentation, Translation… Working with lazy translation objects, I see a remark which seems to address your concern here.
Using ugettext_lazy() and ungettext_lazy() to mark strings in models and utility functions is a common operation. When you're working with these objects elsewhere in your code, you should ensure that you don't accidentally convert them to strings, because they should be converted as late as possible (so that the correct locale is in effect). This necessitates the use of the helper function described next.
Then they present django.utils.functional.lazy(func, *resultclasses), which is not presently covered by the django.utils.functional module documentation. However, according to the django.utils.functional.py source code, it "Turns any callable into a lazy evaluated callable... the function is evaluated on every access."
Modifying their example from Other uses of lazy in delayed translations to incorporate your code, the following code might work for you.
from django.utils import six # Python 3 compatibility
from django.utils.functional import lazy
from django.utils.safestring import mark_safe
choices = [
    (i, lazy(
        mark_safe(ungettext_lazy('%s thing', '%s things', i) % i),
        six.text_type
    ))  # lazy()
    for i in range(1, 4)
]
Also, the django.utils.functional module documentation does mention a function decorator allow_lazy(func, *resultclasses). This lets you write your own function which takes a lazy string as arguments. "It modifies the function so that if it's called with a lazy translation as the first argument, the function evaluation is delayed until it needs to be converted to a string." lazy(func, *resultclasses) is not a decorator, it modifies a callable.
N.B. I haven't tried this code in Django. I'm just passing along what I found in the documentation. Hopefully it will point you to something you can use.
For those who encounter this question: unfortunately, @Jim DeLaHunt's answer doesn't completely work - it's almost there, but not exactly what needs to be done.
The important distinctions are:
What you need to wrap with lazy is a function that returns a text value, not another lazy translation object, or you'll likely see a weird <django.utils.functional.__proxy__ at ...> instead of the actual text (IIRC Django won't walk down a chain of lazy objects). So, use ungettext, not ungettext_lazy.
You want to do the string interpolation only when the wrapped function runs. If you write lazy(f("%d" % 42)), the interpolation happens too early, because Python evaluates the argument eagerly. And don't forget about variable scopes: you can't just refer to the loop variable from inside the wrapped function.
Here, I've used a lambda that receives a number argument and does the interpolation. The code inside lambda is only executed when lazy object is evaluated, that is, when the choice is rendered.
So, the working code is:
choices = [
    (i, lazy(
        lambda cnt: ungettext(u"%(count)d thing",
                              u"%(count)d things", cnt)
                    % {"count": cnt},
        six.text_type
    )(i))
    for i in [1, 2, 3]
]
This will correctly have the same intended effect as
choices = [
(1, _("1 thing")),
(2, _("2 things")),
(3, _("3 things")),
]
But there will be just a single entry for this in the translation database, not multiple ones.
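The variable-scope pitfall mentioned above is plain Python behavior, independent of Django; a minimal sketch:

```python
# Late binding: each lambda looks up i when it is *called*, by which
# time the loop has finished and i is 3 -- so all three agree.
late = [lambda: "%d things" % i for i in [1, 2, 3]]

# The fix used above: call an outer function immediately with the
# current value, so each inner lambda captures its own cnt.
bound = [(lambda cnt: (lambda: "%d things" % cnt))(i) for i in [1, 2, 3]]
```

Calling every function in late yields '3 things' three times, while the functions in bound yield '1 things', '2 things', '3 things' as intended.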
This looks like a situation where you could benefit from the trick taught by Ilian Iliev's blog, Django forms ChoiceField with dynamic values….
Iliev shows a very similar initialiser:
my_choice_field = forms.ChoiceField(choices=get_my_choices())
He says, "the trick is that in this case my_choice_field choices are initialized on server (re)start. Or in other words, once you run the server the choices are loaded (calculated) and they will not change until the next (re)start." Sounds like the same difficulty you are encountering.
His trick is: "fortunately the form's class has an __init__ method that is called on every form load. Most of the time you skip it in the form definition but now you will have to use it."
Here is his sample code, blended with your generator expression:
class MyForm(forms.Form):
    def __init__(self, *args, **kwargs):
        super(MyForm, self).__init__(*args, **kwargs)
        self.fields['my_choice_field'] = forms.ChoiceField(
            choices=(
                (i, ungettext_lazy('%s thing', '%s things', i) % i)
                for i in range(1, 4)
            )# choices=
        )# __init__
The generator expression is enclosed in parentheses so that it is treated as a generator object, which is assigned to choices.
N.B. I haven't tried this code in Django. I'm just passing along Iliev's idea.