Is there an advantage to using Blocks instead of using an additional index in Pyomo models?

When we solve math programming models that have a block-diagonal structure (e.g., multi-period, dynamic optimization with finite elements), is there an advantage to using Blocks instead of an extra index on the variables and constraints? For example, does Pyomo index variables and constraints in such a way that it aids pivoting or factorization while interfacing with solver APIs?

Related

Prediction models, Objective functions and Optimization

How do we define objective functions when doing optimization with Pyomo in Python? We have defined prediction models separately. The next step is to bring the objective functions from the prediction models (gradient boosting, random forest, linear regression, and others) into Pyomo and optimize for the maximum and minimum. Please suggest and share any working example in Pyomo.
Because Pyomo works with algebraic expressions, you should:
Define the mathematical expression of your prediction model function.
Implement the corresponding mathematical model in Pyomo, including the needed parameters, variables, and other constraints.
Apply the min/max optimization.
You can then iterate as follows:
Prediction model function -> Min-max refinement -> Prediction model function adjustment -> Min-max refinement -> ...
as many times as you need to reach your expected accuracy. An API connection and a multi-threaded implementation could help here.

Can I combine PySP and pyomo.DAE to do stochastic dynamic programming?

It is unclear to me whether PySP and pyomo.DAE can be combined. I wish to use stochastic dynamic programming to model optimal stopping/real options valuation, using stochastic differential equations, geometric Brownian motion, and the Bellman equation. I understand that PySP does stochastic programming, and that pyomo.DAE does dynamic optimization. Does Pyomo have the built-in capacity to do stochastic dynamic programming?
PySP and Pyomo.DAE can be combined, but I'm not sure it's what you're looking for. See this paper for a few applications that combine them. The dynamic optimization support in Pyomo.DAE is not the same as dynamic programming (see the documentation here). If you could provide a small example of the exact problem structure you're trying to represent, then I would be able to say more definitively whether we support it.

higher dimensional arrays with runge_kutta4

I want to solve a system of coupled differential equations using boost::numeric::odeint::runge_kutta4. It is a 3D lattice system, so it would be natural (and convenient) for me to work with 3D arrays. Is there a way for runge_kutta4 to work with user-defined data structures or boost::multi_array?
In principle this is possible. odeint provides a mechanism to use custom data structures: algebras and operations. Have a look here. Either you use one of the existing algebras and adapt your data structure to work with it, or you implement your own algebra and instantiate the Runge-Kutta stepper with it.
You might also want to have a look at a library like Eigen, MTL4, Boost.uBLAS, or Armadillo. They might have data types for higher-order tensors. For example, Eigen works very well with odeint.
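A common alternative to a custom algebra is to keep the state in a flat container that odeint already understands and index it as a 3D lattice. The sketch below hand-rolls one classical RK4 step (the same scheme odeint's runge_kutta4 implements) on a flattened lattice, with dx/dt = -x at every site as a placeholder right-hand side; the lattice dimensions are hypothetical:

```cpp
#include <cstddef>
#include <vector>

constexpr std::size_t NX = 2, NY = 2, NZ = 2;
using state_type = std::vector<double>;  // flat storage odeint can also use directly

// Row-major mapping from 3D lattice coordinates to the flat index.
inline std::size_t idx(std::size_t i, std::size_t j, std::size_t k) {
    return (i * NY + j) * NZ + k;
}

// Placeholder right-hand side: dx/dt = -x at every lattice site.
void rhs(const state_type& x, state_type& dxdt) {
    for (std::size_t n = 0; n < x.size(); ++n) dxdt[n] = -x[n];
}

// One classical RK4 step of size dt, updating x in place.
void rk4_step(state_type& x, double dt) {
    const std::size_t n = x.size();
    state_type k1(n), k2(n), k3(n), k4(n), tmp(n);
    rhs(x, k1);
    for (std::size_t i = 0; i < n; ++i) tmp[i] = x[i] + 0.5 * dt * k1[i];
    rhs(tmp, k2);
    for (std::size_t i = 0; i < n; ++i) tmp[i] = x[i] + 0.5 * dt * k2[i];
    rhs(tmp, k3);
    for (std::size_t i = 0; i < n; ++i) tmp[i] = x[i] + dt * k3[i];
    rhs(tmp, k4);
    for (std::size_t i = 0; i < n; ++i)
        x[i] += dt / 6.0 * (k1[i] + 2.0 * k2[i] + 2.0 * k3[i] + k4[i]);
}
```

With this layout, runge_kutta4<std::vector<double>> works out of the box, and the 3D structure lives only in the indexing of the right-hand side.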

A single multi-object Kalman filter vs. multiple-single-object Kalman filters (plural)

Gidday cobbers/esteemed colleagues,
With multi-object tracking that implements Kalman prediction/correction, the general approach I see suggested in other SO threads is simply to keep a vector/array of Kalman filters, one per object,
i.e. 'multiple-single-object Kalman filters'.
But knowing that, if you define your state-space matrices correctly, states that are independent of each other remain independent once any (coherent) math is said and done - why don't we just augment the state and the associated matrices/vectors with all the objects' 'data' and use one Kalman filter? (Yes, there will be lots of zeros in most of the matrices.)
Is there any algorithmic-complexity advantage either way? My intuition is that using one filter instead of many might reduce overhead.
Or is it just easier to manage, in terms of human readability, to deal with multiple filters?
Any other reasons?
Thanks
p.s. eventual code will be in openCV/C++
If by augmenting you mean combining the states of all objects (both means and covariances) into a single super-state and then using a single filter for prediction/estimation of this super-state, then I am afraid your intuition about it being more efficient is most likely wrong.
You need to consider that the KF equations involve operations such as matrix inversion, with O(n^3) (or very close to this figure) computational complexity, where n is the dimension of the matrix. Concretely, tracking k objects with m-dimensional states costs on the order of k * m^3 with separate filters, but (k * m)^3 = k^2 * (k * m^3) with one augmented filter, i.e. roughly k^2 times more work. If you aggregate multiple objects into a single state, the computational complexity will skyrocket, even if there are mostly zeros as you said.
Dealing with multiple filters, one per tracked object, is in my opinion both cleaner from the design standpoint and a more efficient approach. If you are indeed bottlenecked by KF performance (profile first), consider allocating the Kalman Filter data in a contiguous array to minimize cache misses.
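The back-of-envelope comparison above can be checked with a few lines; the object count and state dimension are hypothetical:

```python
# Hypothetical scenario: 10 tracked objects, 6 states each.
k, m = 10, 6

# Cost model: matrix inversion dominates at O(n^3) per filter update.
separate  = k * m ** 3      # k inversions of m x m matrices
augmented = (k * m) ** 3    # one inversion of a (k*m) x (k*m) matrix

ratio = augmented / separate  # = k**2, the penalty for augmenting
```

For 10 objects this is already a factor of 100, before accounting for sparsity that a dense implementation would not exploit.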

Matrix Algebra Using C++

I'd like to ask about mathematical operations on arrays. I am mainly interested in carrying out operations such as:
element-wise vector operations:
C=A+B
C=A*B
where A and B are arrays (or vectors), and
matrix products:
D=E*F;
where D[m][n], E[m][p], F[p][n];
Could anyone tell me the most efficient way to manipulate large quantities of numbers? Is looping over the elements of an array the only option, or is there another way? Can vectors be used, and how?
The C++ spec does not include mathematical constructs like the ones you describe, but the language provides all the features necessary for people to implement them. There are a lot of libraries out there, so it's just up to you to choose one that fits your requirements.
Searching through stack overflow questions might give you an idea of where to start identifying those requirements if you don't know them already.
What's a good C++ library for matrix operations
Looking for an elegant and efficient C++ matrix library
High Performance Math Library for Vector And Matrix Calculations
Matrix Data Type in C++
Best C++ Matrix Library for sparse unitary matrices
Check out Armadillo; it provides lots of matrix functionality behind a C++ interface, and it supports LAPACK, which is what MATLAB uses for linear-algebra calculations.
C++ does not come with any "number aggregate" handling functionality out of the box, with the possible exception of std::valarray. (Compiler vendors could make valarray use vectorized operations, but generally speaking they don't.)
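As a sketch of what is possible without any third-party library: std::valarray covers the element-wise C=A+B / C=A*B cases directly, and the D[m][n] = E[m][p] * F[p][n] matrix product can be written with plain loops over flat row-major storage (the function names here are illustrative, not from any library):

```cpp
#include <cstddef>
#include <valarray>

// Element-wise aggregate arithmetic comes for free with std::valarray:
// operator+ and operator* act on all elements at once.
std::valarray<double> elementwise_sum(const std::valarray<double>& a,
                                      const std::valarray<double>& b) {
    return a + b;
}

// D[m][n] = E[m][p] * F[p][n], all matrices stored row-major in valarrays.
std::valarray<double> matmul(const std::valarray<double>& E,
                             const std::valarray<double>& F,
                             std::size_t m, std::size_t p, std::size_t n) {
    std::valarray<double> D(0.0, m * n);
    for (std::size_t i = 0; i < m; ++i)
        for (std::size_t k = 0; k < p; ++k)        // i-k-j order for better cache locality
            for (std::size_t j = 0; j < n; ++j)
                D[i * n + j] += E[i * p + k] * F[k * n + j];
    return D;
}
```

For large matrices, a tuned library (Armadillo, Eigen, or LAPACK underneath) will beat hand-written loops like these, which is why the answers above point there.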