Differentiating in Pyomo

How can we differentiate constraints defined in Pyomo with respect to particular variables, and multiply those derivative expressions by another Pyomo model component? I want to generate a constraint that involves the derivatives of other constraints, i.e. l1*dg1/dz + l2*dg2/dz == 0, where l1 and l2 are Pyomo variables and g1 and g2 are other constraints in the model. Kindly help me out. Thank you.

Take a look at the differentiate function in pyomo.core.base.symbolic. There are a few faster implementations in the pipeline, but this should give you what you need. For a usage example, you can take a look at the GDPopt solver code.
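For instance, here is a minimal sketch (the pyomo.core.base.symbolic module path and the differentiate(expr, wrt=...) signature match the Pyomo releases current at the time of writing, and symbolic differentiation requires sympy to be installed; the constraints g1 and g2 are made-up placeholders):
from pyomo.environ import ConcreteModel, Var, Constraint
from pyomo.core.base.symbolic import differentiate

m = ConcreteModel()
m.z = Var()
m.l1 = Var()
m.l2 = Var()

# Hypothetical constraints standing in for g1 and g2
m.g1 = Constraint(expr=m.z**2 <= 4)
m.g2 = Constraint(expr=3*m.z + 1 >= 0)

# Symbolically differentiate the constraint bodies with respect to z
dg1_dz = differentiate(m.g1.body, wrt=m.z)  # -> 2*z
dg2_dz = differentiate(m.g2.body, wrt=m.z)  # -> 3

# The desired constraint: l1*dg1/dz + l2*dg2/dz == 0
m.stationarity = Constraint(expr=m.l1 * dg1_dz + m.l2 * dg2_dz == 0)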

Related

Z3 OCaml API: difference between mk_solver_s, mk_solver and mk_simple_solver

Good afternoon,
I am using the Z3 OCaml bindings to verify properties on rational values. I noticed vast differences when I initialized my solver with mk_solver_s ctx "QF_NRA" versus mk_simple_solver (the context here is let ctx = mk_context [("model","true");("unsat_core","true")]).
More specifically, I perform branch and bound exploration on possible activations for a neural network. Activation here is to be understood as whether the input of a certain function is positive or negative; the result of this function then yields different behaviours for the neural network and gives some linear constraints on the inputs.
The neural network's linear part is written as an SMT formula. Then, each time I meet a possible activation, I can check whether it is possible according to the previously met activations. If one activation is possible, the relevant constraints are added to the solver stack and I proceed. If two activations are possible, the solver is cloned, and I proceed with the two solvers, each with an additional constraint.
By using mk_solver_s ctx "QF_NRA", I had many more possible activations than with mk_simple_solver ctx (actually, 2^n where n is the number of neurons); models obtained with the first one did not take some constraints I added into account. For instance, I had
(> |CELL_actual_input_0_0_0_1| (/ 1.0 2.0))
(< |CELL_actual_input_0_0_0_1| 2.0)
(> |CELL_actual_input_0_0_0_0| (/ 1.0 2.0))
(< |CELL_actual_input_0_0_0_0| 2.0)
in my solver stack, but one of my models showed
(define-fun |CELL_actual_input_0_0_0_0| () Real
0.0)
(define-fun |CELL_actual_input_0_0_0_1| () Real
0.0)
Changing only the solver initialization function removes this behaviour.
The documentation (here: https://z3prover.github.io/api/html/ml/Z3.Solver.html) lacks any explanation regarding this; maybe I am not looking in the right place.
I was wondering about the differences between the following functions:
mk_simple_solver
mk_solver_s (which seems to accept only a theories string, though judging from https://github.com/Z3Prover/z3/issues/1035#issuecomment-303160975 it actually accepts a whole lot of tactics, which I am not sure how to use)
mk_solver
What are the "defaults" that mk_simple_solver sets and mk_solver_s does not?
I would be eager to open a pull request to enhance the OCaml API, but I am not quite sure where to begin.
Thank you in advance :)
You can answer some of these questions by looking at their implementations:
let mk_solver ctx logic =
  match logic with
  | None -> Z3native.mk_solver ctx
  | Some x -> Z3native.mk_solver_for_logic ctx x

let mk_solver_s ctx logic = mk_solver ctx (Some (Symbol.mk_string ctx logic))

let mk_simple_solver = Z3native.mk_simple_solver
So, the string in mk_solver_s lets you pick which logic you want. (Not "theories", which are different. Logics can be thought of as combinations of theories; see the SMT-LIB site for details: http://smtlib.cs.uiowa.edu/theories.shtml vs http://smtlib.cs.uiowa.edu/logics.shtml)
That is, mk_solver_s is exactly the same as mk_solver, except it allows you to start with a given logic. The logic selection mostly matters as to which symbols are available in the solver, since some terms only make sense in certain logics; for instance, you cannot use quantifiers in any logic that declares itself quantifier-free. As for mk_simple_solver: if I remember the C API documentation correctly, it corresponds to Z3's plain incremental SMT solver, without the preprocessing tactics that the default combined solver applies, which could explain behavioural differences on the same constraints.
You said you noticed "vast differences" using these, but did not elaborate on what those differences are. Do you mean performance, or what can be sat/unsat? Assuming you don't hit an "unknown symbol" kind of issue, in general there should be no difference in sat/unsat answers, but performance can vary depending on logic selection. (For instance, picking a difference logic can make a huge impact on constraints that don't need anything else.) But without details it's hard to opine.
Hope this gets you started. Sometimes the best thing to do is to look at the source code itself!
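As an aside, the Python bindings expose the same three entry points as Solver(), SimpleSolver() and SolverFor(logic), which is handy for quick experiments. A minimal sketch of the "same sat/unsat answer" point above (the constraint values are made up):
from z3 import Real, Solver, SimpleSolver, SolverFor, sat

x = Real('x')
constraints = [x > 0.5, x < 2.0]

# All three constructors should agree on satisfiability; only the
# tactics applied (and therefore performance) may differ.
for make in (Solver, SimpleSolver, lambda: SolverFor("QF_NRA")):
    s = make()
    s.add(*constraints)
    assert s.check() == sat
    print(s.model())  # any model must satisfy 0.5 < x < 2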

Either/or constraint in linear programming (PuLP)

I am new to LP and also to using pulp. I'm trying to add an either/or constraint of the below form:
x == 0 or x >= 35
I understand that introducing a binary decision variable might help, but then the model won't be a linear program anymore. Are there other Python packages I can use in that case?
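For reference, the standard trick here is a big-M formulation with a binary indicator; the model then becomes a mixed-integer linear program, which PuLP's bundled CBC solver handles directly. A minimal sketch, assuming a known upper bound U on x (the bound value and the placeholder objective are made up):
from pulp import LpProblem, LpMinimize, LpVariable, LpBinary

U = 1000  # assumed upper bound on x; choose one valid for your model
prob = LpProblem("either_or", LpMinimize)
x = LpVariable("x", lowBound=0, upBound=U)
y = LpVariable("y", cat=LpBinary)  # y = 0 forces x = 0, y = 1 forces x >= 35

prob += x            # placeholder objective
prob += x >= 35 * y  # if y = 1, then x >= 35
prob += x <= U * y   # if y = 0, then x <= 0, i.e. x == 0
prob.solve()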

Can I add a term to the Lagrangian calculation in Pyomo/PySP?

I would like to use Pyomo's PySP framework to do some stochastic optimization. In this model, I have some variables that must be the same across scenarios (i.e., the standard root node variables). As part of the Progressive Hedging approach, PySP creates an augmented Lagrangian, whose multipliers are adjusted iteratively until all these variables are equal across scenarios. All good so far. But I also have some constraints that must be enforced on an expected value basis. In the extensive form, these look like this:
sum(probability[s] * use[s] for s in scenarios) == resource
This complicating constraint could be factored out with a Lagrangian relaxation. This would require adding a term like this to the main objective function (which would then become part of each scenario's objective function):
(
    lambda * (sum(probability[s] * use[s] for s in scenarios) - resource)
    + mu/2 * (sum(probability[s] * use[s] for s in scenarios) - resource)**2
)
This is very similar to the Lagrangian terms for the nonanticipativity constraints that are already in the main objective function. At each iteration, the PySP framework automatically updates the multipliers for the nonanticipativity terms and then propagates their values into the individual scenarios.
So my question is, is there some way to add my terms to the standard Lagrangian managed by PySP, and have it automatically update my multipliers along with its own? I don't mind doing some heavy lifting, but I can't find any detailed documentation on how PySP is implemented, so I'm not sure where to start.
The PH implementation in PySP supports some level of customization through the use of user-defined extensions. These are classes you can implement whose methods are called by PH at different points in the algorithm. You can tell PH to use an extension by setting the command-line option "--user-defined-extension" to a file that contains an implementation. A number of examples can be found here (look for files that contain IPHExtension and copy what they do).
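As a rough illustration, an extension skeleton looks something like the following (the plugin imports and method names follow the IPHExtension interface in phextension.py as best I recall, so treat them as assumptions and verify them against your PySP version; PH invokes a fixed set of callbacks, so you will likely need to define the full set, two of which are sketched here):
from pyomo.util.plugin import SingletonPlugin, implements
from pyomo.pysp import phextension

class ExpectedValueLagrangian(SingletonPlugin):
    # Register this class as a PH extension plugin.
    implements(phextension.IPHExtension)

    def post_iteration_0(self, ph):
        # Hypothetical hook: initialize your own multipliers
        # (lambda, mu) after the iteration-0 scenario solves.
        pass

    def post_iteration_k_solves(self, ph):
        # Hypothetical hook: after each iteration's scenario solves,
        # recompute the expected-value residual and update your own
        # multipliers, mirroring how ph.py updates its weights.
        pass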
Unfortunately, there is no existing code that will make what you want to do easy. You will have to look at the source code to see how PH updates and manages these objective parameters (see ph.py for this, and to see where the different extension methods are called in the algorithm).

Proper flow control in Prolog without using the non-declarative if-then-else syntax

I would like to check for an arbitrary fact and do something if it is in the knowledge base and something else if it is not, but without the ( I -> T ; E ) syntax.
I have some facts in my knowledge base:
unexplored(1,1).
unexplored(2,1).
safe(1,1).
Given an incomplete rule:
foo :-
    safe(A,B),
    % do something if unexplored(A,B) is in the knowledge base
    % do something else if unexplored(A,B) is not in the knowledge base
What is the correct way to handle this, without doing it like this?
foo :-
    safe(A,B),
    ( unexplored(A,B) -> something ; something_else ).
Not an answer but too long for a comment.
"Flow control" is by definition not declarative. Changing the predicate database (the defined rules and facts) at run time is also not declarative: it introduces state to your program.
You should really consider very carefully if your "data" belongs to the database, or if you can keep it a data structure. But your question doesn't provide enough detail to be able to suggest anything.
You can however see this example of finding paths through a maze. In this solution, the database contains information about the problem that does not change. The search itself uses the simplest data structure, a list. The "flow control", if you want to call it that, is implicit: it is just a side effect of Prolog looking for a proof. More importantly, you can reason about the program and what it does without taking into consideration the exact control flow (but you do take into consideration Prolog's resolution strategy).
The fundamental problem with this requirement is that it is non-monotonic:
Things that hold without this fact may suddenly fail to hold after adding such a fact.
This inherently runs counter to the important and desirable declarative property of monotonicity.
Declaratively, from adding facts, we expect to obtain at most an increase, never a decrease of the things that hold.
For this reason, your requirement is inherently linked to non-monotonic constructs like if-then-else, !/0 and setof/3.
A declarative way to reason about this is to entirely avoid checking properties of the knowledge base. Instead, focus on a clear description of the things that hold, using Prolog clauses to encode the knowledge.
In your case, it looks like you need to reason about states of some search problem. A declarative way to solve such tasks is to represent the state as a Prolog term, and write pure monotonic rules involving the state.
For example, let us say that a state S0 is related to state S if we explore a certain position Pos that was previously not explored:
state0_state(S0, S) :-
    select(Pos-unexplored, S0, S1),
    S = [Pos-explored|S1].
or shorter:
state0_state(S0, [Pos-explored|S1]) :-
    select(Pos-unexplored, S0, S1).
I leave figuring out the state representation I am using here as an easy exercise. Notice the convenient naming convention of using S0, S1, ..., S to chain the different states.
This way, you encode explicit relations about Prolog terms that represent the state. Pure, monotonic, and works in all directions.

How can you structure a script to identify like algebraic terms?

I'm trying to write a script that in some way represents algebraic expressions, and I'm trying to make it as general as possible so that it can eventually accommodate things like multivariable expressions, e.g. xy^2 = z, and other things like trig functions. However, I need my script to be able to simplify expressions, e.g. simplifying x^2 + 2x^2 to 3x^2, and in order to do that I need it to recognize like terms. However, in order to get it to recognize like terms I need it to be able to tell me when two expressions are identical, even if they don't look the same. So for instance I need == to be defined in such a way that the computer will know that (x^2)^2 is x^4.
Now, so far the only way I can see to make a computer know when two algebraic expressions are identical like this is to create some kind of "normal form" for all expressions and then compare the normal forms. For instance, if I distribute all exponents over multiplication, expand powers of sums, distribute multiplication over addition, and evaluate all purely numeric subexpressions, then this might be at least close to a normal form. So, for example, the normal form of (x^2)^2 would be x^4, and the normal form of x^4 would be x^4. Since they have the same normal form, the computer can tell me they're equivalent expressions. It would say the normal form of (2x)^2 + x^2 is 4x^2 + x^2, though, and so wouldn't recognize that this normal form is the same as the normal form of 5x^2.
I'm thinking that at this stage I could define some "weak" notion of equality, namely equality of normal-form components, use it to group like terms in the normal form, and thereby get a more universally correct normal form.
But all of this sounds like an absolute ton of work. So far I've defined classes for Expressions, with subclasses for Variables, Sums, Products, Powers, and so on, and right now I'm about a quarter of the way through defining the function that produces the normal form of a Power object (I haven't even begun on the normal forms for the Sum and Product classes), and already the code is many pages long, and I'm still not sure it'll ultimately work the way I want it to.
So my question is, how do you accomplish this goal? Will my current method work? Does anyone know how software like Wolfram|Alpha or the sympy package accomplish this functionality?
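For what it's worth, SymPy takes essentially the normal-form approach sketched above: expressions are trees built from a few core classes (Add, Mul, Pow) that canonicalize automatically on construction (collecting like terms, combining integer powers), with heavier rewriting delegated to functions like expand() and simplify(). A quick sketch of what equivalence checks then look like:
import sympy as sp

x = sp.symbols('x')

# Integer powers of powers canonicalize on construction:
print((x**2)**2 == x**4)  # True

# Add collects like terms automatically, so expand() is enough here:
print(sp.expand((2*x)**2 + x**2) == 5*x**2)  # True: 4*x**2 + x**2 -> 5*x**2

# The general-purpose test: simplify the difference and compare to zero.
print(sp.simplify((x + 1)**2 - (x**2 + 2*x + 1)) == 0)  # True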