Is there a Stata module or code available for the Expectation Maximization (EM) algorithm? I cannot seem to find any, but I thought it was worth checking in.
My interest is in EM for record linkage. See, for example:
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1479910/
The usual name is expectation-maximization.
There is not a general command or set of commands providing a framework for applications of EM. Rather, the EM algorithm is used within the code for various commands.
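If you end up coding it yourself (e.g. in Mata), the core iteration is short. Here is a rough sketch, written in Python only for brevity and not as a Stata substitute, of the kind of EM update used in Fellegi-Sunter style record linkage; the starting values and variable names are purely illustrative:

import numpy as np

def em_record_linkage(gamma, n_iter=200, tol=1e-8):
    # gamma: (n_pairs, n_fields) array of 0/1 field-agreement indicators
    n_pairs, n_fields = gamma.shape
    p = 0.1                     # initial guess for P(pair is a true match)
    m = np.full(n_fields, 0.9)  # P(field agrees | match)
    u = np.full(n_fields, 0.1)  # P(field agrees | non-match)
    for _ in range(n_iter):
        # E-step: posterior probability that each pair is a match
        lik_match = p * np.prod(m**gamma * (1 - m)**(1 - gamma), axis=1)
        lik_non = (1 - p) * np.prod(u**gamma * (1 - u)**(1 - gamma), axis=1)
        w = lik_match / (lik_match + lik_non)
        # M-step: re-estimate parameters from the weighted agreement patterns
        p_new = w.mean()
        m = (w[:, None] * gamma).sum(axis=0) / w.sum()
        u = ((1 - w)[:, None] * gamma).sum(axis=0) / (1 - w).sum()
        if abs(p_new - p) < tol:
            p = p_new
            break
        p = p_new
    return p, m, u, w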
Is there a way to output the final tableau in Python with the docplex library? If not, is there a workaround?
I want to use the dual simplex method to solve a linear programming problem with newly added constraints, so I need access to the final tableau to decide which variable should leave the basis, without having to re-solve the problem from scratch.
This sort of low-level interaction cannot be done at the docplex level. To do it, you can use Model.get_cplex() to get a reference to the underlying engine object, from which you can retrieve additional information. You can find the reference documentation for this class here. You probably want to look at the solution, solution.basis, and solution.advanced properties. These should give you all the information you need.
Note that the engine works with an index-oriented model in which every variable or constraint is just a number. You can map engine indices back to docplex variable objects with Model.get_var_by_index().
I also wonder whether you may want to drop docplex and instead use the CPLEX Python API directly. You can find its documentation here.
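As a rough illustration of the get_cplex() route (a sketch only, assuming a local CPLEX installation; which basis/advanced queries you actually need depends on how you plan to warm-start the dual simplex):

from docplex.mp.model import Model

# Small LP solved through docplex, then inspected via the underlying engine.
m = Model(name="lp")
x = m.continuous_var(name="x")
y = m.continuous_var(name="y")
m.add_constraint(x + 2 * y <= 14)
m.add_constraint(3 * x - y >= 0)
m.maximize(3 * x + 4 * y)
m.solve()

cpx = m.get_cplex()  # low-level cplex.Cplex object behind the docplex model
col_status, row_status = cpx.solution.basis.get_basis()  # basis status codes

# The engine is index-oriented: map a column index back to a docplex variable.
v0 = m.get_var_by_index(0)
print(v0.name, col_status[0])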
I have programmed an algorithm that finds feasible points for mixed-integer convex optimization problems. Now I wish to compare it with the Feasibility Pump for mixed-integer nonlinear programs on a testbed from the MINLPLib library.
I have access to the BONMIN solver from the COIN-OR project, where the Feasibility Pump is also implemented, via Pyomo. Here is a list of possible options for this solver.
My questions are:
Are the following options correct for testing the (plain vanilla) Feasibility Pump?
opt = SolverFactory('bonmin')
opt.options['bonmin.algorithm'] = 'b-ifp' # Iterated Feasibility Pump as solver
opt.options['bonmin.pump_for_minlp'] = 'yes' # Is this needed?
opt.options['bonmin.solution_limit'] = '1' # For terminating after the 1st feasible point
If not, any hint on how to do it correctly is appreciated.
How do I access the number of iterations (i.e. pumping cycles) of the Feasibility Pump? I can see the iteration information in the printed output, but it would be very helpful if it were stored in some variable.
Pyomo calls Bonmin through the ASL (AMPL Solver Library) interface. Therefore, whatever options work for AMPL should also be the appropriate ones here.
As for the iteration information, there are various ways of capturing the printed output and parsing it to retrieve what you need. The most straightforward is probably to pipe the output to a file and read it in a small post-processing script/function.
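If you go the log-capture route, here is a minimal Pyomo sketch; the toy model is only there to make the snippet self-contained, and the regular expression is a placeholder, since the exact wording of Bonmin's iteration lines varies by version:

import re
from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                           NonNegativeIntegers, SolverFactory)

# Toy convex MINLP standing in for the real test instance.
model = ConcreteModel()
model.x = Var(within=NonNegativeIntegers, bounds=(0, 10))
model.y = Var(bounds=(0, 10))
model.obj = Objective(expr=(model.x - 3.5) ** 2 + (model.y - 1) ** 2)
model.con = Constraint(expr=model.x + model.y >= 2)

opt = SolverFactory('bonmin')
opt.options['bonmin.algorithm'] = 'b-ifp'  # iterated feasibility pump, as in the question

# tee=True echoes the solver log; logfile= keeps a copy on disk for parsing.
results = opt.solve(model, tee=True, logfile='bonmin.log')

with open('bonmin.log') as f:
    log = f.read()

# Placeholder pattern: adapt it to the iteration lines your Bonmin build prints.
pump_iterations = re.findall(r'FP iteration\s+(\d+)', log)
print(len(pump_iterations), 'feasibility-pump iterations found')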
I am new to QuantLib, but I have a relatively good understanding of C++. To put my question in context: I am trying to implement the portfolio CVA calculation method proposed by Giovanni Cesari (see the link below) for a simple portfolio consisting of one interest rate swap (as a starting point).
I am working with the Cheyette (quasi-Gaussian) interest rate model. The model has been implemented as a new stochastic process class, and the simulated paths are generated and stored in a MultiPath class.
However, as Cesari's method is based on least-squares Monte Carlo, I need to write code that generates the resulting cash flow sequence of an interest rate swap under each scenario. I do not know whether there is an efficient way to do this in QuantLib, but I have a feeling there is. I guess one should be able to let the generated scenarios populate QuantLib quotes linked to an instance of a QuantLib term structure class, and then use that together with the swap class to obtain the realized cash flows (basically the floating leg, as the fixed leg is deterministic).
Any idea of how this can be done will be highly appreciated!
Link to Cesari's presentation (see pages 21 to 26): http://sfi.epfl.ch/files/content/sites/sfi/files/shared/Swissquote%20Conference%202010/Cesari_talk.pdf
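A rough sketch of the quote/handle mechanics described above, in QuantLib-Python for brevity (the C++ calls are analogous); the flat scenario curve, the Euribor6M index, and all dates and rates are placeholders for the Cheyette-simulated quantities:

import QuantLib as ql

# Placeholder market setup: one SimpleQuote drives a flat forwarding curve held
# in a relinkable handle; in the real application each scenario would populate
# quotes (or relink the handle to a scenario curve) built from the simulated paths.
today = ql.Date(15, ql.June, 2023)
ql.Settings.instance().evaluationDate = today

scenario_rate = ql.SimpleQuote(0.02)
curve = ql.FlatForward(today, ql.QuoteHandle(scenario_rate), ql.Actual365Fixed())
curve_handle = ql.RelinkableYieldTermStructureHandle(curve)

index = ql.Euribor6M(curve_handle)  # floating leg is forecast off the scenario curve
swap = ql.MakeVanillaSwap(ql.Period("5Y"), index, 0.025, ql.Period("0D"))

def floating_cashflows(simulated_rate):
    # Update the scenario quote, then read off the floating-leg coupon amounts.
    scenario_rate.setValue(simulated_rate)
    return [(cf.date(), ql.as_floating_rate_coupon(cf).amount())
            for cf in swap.floatingLeg()]

print(floating_cashflows(0.02)[:2])   # "realized" cash flows under scenario 1
print(floating_cashflows(0.035)[:2])  # ...and under scenario 2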
I am looking for a way, either in Python or C++, to learn the parameters of an AR-HMM model.
There are many packages for fitting and inference in HMMs when the emission probabilities are conditional only on the hidden state, but I cannot seem to find one where the emission probabilities are conditional on both the state and prior observations.
Available: a standard HMM, in which each observation depends only on the current hidden state (diagram source: tugraz.at).
Cannot find: an AR-HMM, in which each observation also depends on previous observations (diagram: http://mayer.pro/files/HMM-web/HMM-ar.png).
Does anyone know of such a package? The goal is to fit a regime-switching AR(p) model; if it can do regime-switching ARMA(p,q), even better.
Python/C++ preferred, open source.
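One possibility, offered as a hedged suggestion rather than a definitive fit for the AR-HMM formulation above: statsmodels' MarkovAutoregression estimates regime-switching AR(p) models (it does not cover switching ARMA(p,q)). A minimal sketch with synthetic data:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

def simulate_ar1(phi, n):
    # Simple AR(1) simulator, used only to create illustrative data.
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.standard_normal()
    return x

# Two regimes with different AR(1) dynamics, concatenated back to back.
y = np.concatenate([simulate_ar1(0.9, 200), simulate_ar1(-0.4, 200)])

# 2-regime AR(1) with switching autoregressive coefficients.
model = sm.tsa.MarkovAutoregression(y, k_regimes=2, order=1, switching_ar=True)
result = model.fit()
print(result.summary())
print(result.smoothed_marginal_probabilities[:5])  # P(regime | data), first few obs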
I'm using the libsvm library in my project and have recently discovered that it provides out-of-the-box cross-validation.
I'm checking the documentation, and it says clearly that I have to call svm-train with the -v switch to use the cross-validation feature.
When I call it with the -v switch, I cannot get a model file, which is needed by svm-predict.
Implementing a support vector machine from scratch is beyond the scope of my project, so I'd rather fix this one if it is broken, or ask the community for support.
Can anybody help with that?
Here's the link to the library, implemented in C and C++, and here is the paper that describes how to use it.
That's because libsvm uses cross-validation only for parameter selection.
From the libsvm FAQ:
Q: After doing cross validation, why there is no model file outputted ?
Cross validation is used for selecting good parameters. After finding them, you want to re-train the whole data without the -v option.
If you want to use cross-validation to estimate the quality of a classifier on your data, you should implement external cross-validation yourself: split the data, train on one part, and test on the other.
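A minimal sketch of that workflow through libsvm's Python bindings (assuming the bindings are installed as the libsvm package; in the source tree the module is just svmutil, and the file names below are placeholders). The same applies to the command-line tools: run svm-train with -v to get the cross-validation accuracy, then run it again without -v to write the model file for svm-predict.

from libsvm.svmutil import svm_read_problem, svm_train, svm_predict

y, x = svm_read_problem('train.txt')   # placeholder training file

# -v 5 runs 5-fold cross-validation: it returns an accuracy estimate, not a model.
cv_accuracy = svm_train(y, x, '-c 1 -g 0.5 -v 5')

# Retrain on the full data with the chosen parameters to obtain a usable model.
model = svm_train(y, x, '-c 1 -g 0.5')

yt, xt = svm_read_problem('test.txt')  # placeholder held-out file
labels, acc, vals = svm_predict(yt, xt, model)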
It's been a while since I used libsvm, so I don't think I have the answer you're looking for, but if you run the cross-validation and are satisfied with the results, running svm-train with the same parameters but without -v will train on the full data and produce the model file you need.