Is there a good reason to use Derivative instead of diff in the definition (and solution) of an ODE in SymPy? diff seems to do the job just fine.
diff is a "wrapper" method that instantiates the Derivative class. So, doing this:
from sympy import *
x = symbols("x")
expr = x**2
expr.diff(x)
# out: 2*x
is equivalent to doing:
Derivative(expr, x).doit()
# out: 2*x
However, the Derivative class might be useful to delay the evaluation of a derivative. For example:
Derivative(expr, x)
# out: Derivative(x**2, x)
But the same thing can also be achieved with:
expr.diff(x, evaluate=False)
# out: Derivative(x**2, x)
So, to answer your question: in the example you provided, there is absolutely no difference between using diff and Derivative.
If expr.diff(variable) can be evaluated, it returns an instance of Expr (a symbol, a number, a multiplication, an addition, a power, and so on, depending on expr). Otherwise, it returns an object of type Derivative.
The Derivative object represents an unevaluated derivative. It is never evaluated automatically. For example:
>>> Derivative(x**2, x)
Derivative(x**2, x)
diff is a function which always tries to evaluate the derivative. If the derivative in question cannot be evaluated, it just returns an unevaluated Derivative object.
>>> diff(x**2, x)
2*x
Since the derivative of an undefined function can never be evaluated, Derivative and diff give the same result for one:
>>> diff(f(x), x)
Derivative(f(x), x)
>>> Derivative(f(x), x)
Derivative(f(x), x)
There's only a difference between the two in cases where the derivative can be evaluated. For ODEs this generally means it doesn't matter, except perhaps when you have something like the following that you don't want expanded:
>>> diff(x*f(x), x)
x*Derivative(f(x), x) + f(x)
>>> Derivative(x*f(x), x)
Derivative(x*f(x), x)
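For an ODE this plays out the same way. A minimal sketch (assuming only the standard Function, Eq and dsolve API): since f(x) is an undefined function, both spellings build the same unevaluated Derivative node, and dsolve accepts either.

from sympy import symbols, Function, Eq, Derivative, dsolve

x = symbols("x")
f = Function("f")

# Both forms produce the same unevaluated Derivative,
# because f is an undefined function.
ode1 = Eq(f(x).diff(x), f(x))
ode2 = Eq(Derivative(f(x), x), f(x))

print(dsolve(ode1, f(x)))  # Eq(f(x), C1*exp(x))
print(dsolve(ode2, f(x)))  # Eq(f(x), C1*exp(x))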
I just spent an embarrassing amount of time figuring out that if you're passing a value of a parameterized datatype into a higher-order function in SML, it needs to be in parentheses (); so, for example:
fun f1 p = f2 p will work when called like f1(Datatype(parameter)), but will not work if called like f1 Datatype(parameter). I'm sure there's a very simple reason why, but I'm not quite clear on it. Is it something like: the datatype and parameter are "seen" as two things by the function if not in parentheses? Thanks!
It's important to realize how functions work in SML. Functions take a single argument and return a single value. This is very easy to understand, but it is very often practically necessary for a function to take more than one value as input. There are two ways of achieving this:
Tuples
A function can take one value that contains multiple values in the form of a tuple. This is very common in SML. Consider for instance:
fun add (x, y) = x + y
Here (x, y) is a tuple inferred to be composed of two ints.
Currying
A function takes one argument and returns one value. But functions are values in SML, so a function can return a function.
fun add x = fn y => x + y
Or just:
fun add x y = x + y
This is common in OCaml, but less common in SML.
Function Application
Function application in SML takes the form of functionName argument. When a tuple is involved, it looks like: functionName (arg1, arg2). But the space can be elided: functionName(arg1, arg2).
Even when tuples are not involved, we can put parentheses around any value. So calling a function with a single argument can look like: functionName argument, functionName (argument), or functionName(argument).
Your Question
f1(Datatype(parameter))
This parses the way you expect.
f1 Datatype(parameter)
This parses as f1 Datatype parameter. Function application is left-associative, so it is read as (f1 Datatype) parameter: the curried function f1 applied first to the constructor Datatype, with the result applied to parameter.
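To see this concretely, here is a small sketch with a hypothetical datatype (the names are made up for illustration):

datatype shape = Circle of real

fun area (Circle r) = 3.14159 * r * r

(* OK: area receives one argument, the constructed value *)
val a1 = area (Circle 2.0)

(* Type error: parses as (area Circle) 2.0, i.e. area applied to
   the constructor Circle itself, and the result applied to 2.0 *)
(* val a2 = area Circle 2.0 *)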
My comprehension of the problem comes from Hailperin et al.'s "Concrete Abstractions". I got that currying translates the evaluation of a function that takes several arguments into the evaluation of a sequence of functions, each with a single argument. The semantic difference between the two approaches (can I call them that?) is clear to me, but I am sure I have not grasped their practical implications.
Please consider, in OCaml:
# let foo x y = x * y;;
val foo : int -> int -> int = <fun>
and
# let foo2 (x, y) = x * y;;
val foo2 : int * int -> int = <fun>
The results will be the same for the two functions.
But, practically, what makes the two functions different? Readability? Computational efficiency? My lack of experience keeps me from giving this problem an adequate reading.
First of all, I would like to stress that, thanks to compiler optimizations, the two functions above will be compiled into the same assembly code. Without those optimizations, the cost of currying would be too high: applying a curried function would require allocating a number of closures equal to the number of arguments.
In practice, a curried function is useful for partial application. For example:
let double = foo 2
let double2 x = foo2 (2,x)
Another implication is that in the curried form you do not need to allocate temporary tuples for the arguments. In the example above, the function double2 creates an unnecessary tuple (2,x) every time it is called.
Finally, the curried form simplifies reasoning about functions: instead of having N families of N-ary functions, we have only unary functions. This lets us type all functions uniformly; for example, the type 'a -> 'b describes every function, e.g., int -> int, int -> (int -> int), etc. Without currying, we would have to encode the number of arguments in the type of a function, with all the negative consequences of that.
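As a small illustration of the partial-application point (using only the standard List.map), foo 2 is an ordinary unary function and can be passed around like any other value:

# let foo x y = x * y;;
val foo : int -> int -> int = <fun>
# List.map (foo 2) [1; 2; 3];;
- : int list = [2; 4; 6]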
With the first implementation you can define, for example
let double = foo 2
the second implementation cannot be partially applied in the same way.
Does f(x)+(g(y)) make sure that g(y) is called first?
I know the order of evaluation within an expression is unspecified in many cases, but do the parentheses work in this case?
Parentheses exist to override precedence. They have no effect on the order of evaluation.
Look ma, two lines!
auto r = g(y);
f(x) + r;
This introduces the all-important sequence point between the two function calls. There may be other ways to do it, but this way is straightforward and obvious. Note that your parentheses do not introduce a sequence point, so they aren't a solution.
No. The order in which the two operands of + are evaluated is unspecified, whether or not the operator is overloaded, and parentheses don't change that: they only control grouping, not the order of evaluation. If you need f to be evaluated before g, you can always do:
auto resultOfF = f(x);
auto resultOfG = g(y);
resultOfF + resultOfG;
Is it possible to optimize a series of "glued together" std::functions and/or is there any implementation that attempts to do this?
What I mean is most easily expressed mathematically: say I want to make a std::function that is a function of a function:
f(x,y,z) = x^2 * y^3 * z^4
g(x,y,z) = f(x,y,z) / (x*y^2)
Is there a way for an STL/compiler implementor to optimize away parts of the arithmetic when calling a function object of g, created from a function object of f?
This would be a kind of symbolic simplification of the functions, but because this is a std::function, it would have to be spotted on a machine level.
Since this is an optimization, which takes time and probably isn't free (in clock cycles and/or memory), it probably isn't allowed by the Standard? It leans very close to a language that is typically run through a VM. (I'm thinking LLVM more than Java here, with runtime optimizations.)
EDIT: To make the discussion more useful, here's a short code snippet. (I understand a lambda is not a std::function, but a lambda can be stored in a std::function, so assuming auto below means std::function<T> with the appropriate T expresses what I meant above.)
auto f = [](const double x, const double y, const double z){ return x*x*y*y*y*z*z*z*z; };
auto g = [&f](const double x, const double y, const double z){ return f(x,y,z)/(x*y*y); };
A "trivial" compiler would make g equivalent to
double g(const double x, const double y, const double z){ return x*x*y*y*y*z*z*z*z/(x*y*y); }
While an optimized std::function could make it (mathematically and in every other sense correct!):
double g( const double x, const double y, const double z){ return x*y*z*z*z*z; }
Note that although I'm talking about mathematical functions here, similar transformations could be made for functions in the general sense, but that would take more introspection, which means overhead.
I can see this being very important when designing mathematical and physics simulations, where the generality of compositing existing library functions into user-case functions, with all the usual mathematical simplifications could make for a nice method of expressive, yet performant calculation software.
This is why you leave the optimizing to the compiler. The two versions are algebraically equivalent but not numerically equivalent, due to floating-point imprecision: your two versions of g would yield subtly different answers, which could be very important if called in an inner loop, not to mention the behavioural difference if x or y were 0.
Secondly, as the contents of a std::function are unknown until run-time, there's no way the compiler could perform such optimizations: it doesn't have the information it needs.
The compiler is allowed to optimize in specific allowed cases, or if the optimized code behaves "as if" it were the unoptimized code.
In this case, not only would x or y being 0 change the results, but if f overflowed, or if the data types were floating point or user-defined, the results could change as a result of such an optimization. Thus I suspect that in practice you'll never see it happen, and you would have to (if possible) compose a combined function at compile time (presumably using templates), as sketched below.
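A minimal sketch of that compile-time approach (hypothetical code, using plain lambdas rather than std::function so the bodies stay visible to the optimizer):

#include <cstdio>

int main() {
    // Plain lambdas: their types carry the bodies, so the compiler can
    // inline f into g and optimize the combined arithmetic (within the
    // limits of the floating-point semantics discussed above).
    auto f = [](double x, double y, double z) { return x*x * y*y*y * z*z*z*z; };
    auto g = [&f](double x, double y, double z) { return f(x, y, z) / (x * y*y); };

    std::printf("%f\n", g(1.0, 2.0, 3.0));  // 162.000000
    return 0;
}

A std::function erases exactly this type information, which is why the same simplification cannot happen once f and g are stored in one.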
A monad is defined as an endofunctor on a category C. Let's say C has the types int and bool, and other constructed types, as objects. Now let's think about the list monad defined over this category.
By its very definition, List is then an endofunctor: it maps (can this be interpreted as a function?) the type int to List[int] and bool to List[bool], and it maps (again a function?) a morphism int -> bool to
List[int] -> List[bool]
So far, it kind of makes sense. But what throws me into deep confusion are the additional natural transformations that need to accompany it:
a. Unit, which transforms int into List[int] (doesn't the definition of the List functor already imply this? This is one major confusion I have).
b. Does the List functor always have to be understood as mapping int to List[int], never int to List[bool]?
c. Is the unit natural transformation from int to List[int] different from the map from int to List[int] implied by defining List as a functor? I guess this is just a restatement of my earlier question.
Unit is a natural transformation from the Identity functor on C to List. In general, a natural transformation a : F => G between two parallel functors F, G : X -> Y consists of:
for each object x: X of the domain, a morphism a_x : Fx -> Gx
plus a naturality condition relating the actions of F and G on morphisms: for every morphism f : x -> y, a_y ∘ F(f) = G(f) ∘ a_x
You should think of a natural transformation as above as a way of "going" from F to G. Applying this to your unit-for-List situation: Unit specifies, for each type X, a function Unit_X : X -> List[X], and this just views instances of your type as one-element List[X] instances.
I don't understand exactly what you're asking in b., but with respect to c., they're completely different things. There is no map from int to List[int] implied by the definition; what the definition gives you is, for each map f : X -> Y, a map List(f) : List[X] -> List[Y]. What Unit gives you is a way of viewing any type X as a particular kind of list of X's: the one-element lists.
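To make the distinction concrete, here is a minimal sketch in Scala (the names fmap and unit are mine, chosen for illustration):

// The functor's action on morphisms: from f : A => B it builds a
// function List[A] => List[B]. It needs a morphism as input.
def fmap[A, B](f: A => B): List[A] => List[B] = xs => xs.map(f)

// The unit natural transformation: one component per type A,
// viewing a value as a one-element list. It needs no morphism.
def unit[A](a: A): List[A] = List(a)

fmap((n: Int) => n > 0)(List(1, -2))  // List(true, false)
unit(42)                              // List(42)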
Hope it helps. From the List[] notation you use, maybe you come from a Scala/Java background; if that's the case, you may find this intro to category theory in Scala interesting: http://www.weiglewilczek.com/blog/?p=2760
Well, what is really confusing is that a functor F between a category A and a category B is defined as:
a mapping:
F maps A to F(A) --- does it mean new List()? or why not?
and F maps f : A -> B to F(f) : F(A) -> F(B)
This is how I see these being defined in the books. Point #1 above (F maps A to F(A)) reads to me like a morphism converting A into F(A). If that is the case, why do we need the unit natural transformation to go from A to F(A)?
What is very curious is that the functor definition uses the word map but not the word morphism: A to F(A) is called a map, not a morphism.