Gurobipy vs Pyomo Discrepancy - pyomo

Summary: Using Gurobi, Pyomo and gurobipy generate seemingly equivalent .lp files, but only the Pyomo version solves. How can I trace how Pyomo generates its .lp file to work out the difference?
Hi Everyone,
I was hoping I could get a bit of insight. I've been using Pyomo at work to develop a model solved with Gurobi. I'm thinking of switching to the gurobipy package because I need to optimise a MIQCP model with multiple objectives.
Before attempting to write the multi-objective model in gurobipy, I tried to replicate the model I already have in Pyomo, but I've been failing miserably. The .lp file produced by Pyomo looks identical to the gurobipy one, yet the gurobipy model fails while the Pyomo version works as intended.
I've compared the two .lp files generated by both approaches and they seem similar. To help diagnose the problem I've tried to follow Pyomo's workflow for generating the .lp file, but I'm not having much success.
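One mechanical way to compare the two files is to normalise them before diffing, since the two writers may differ only in whitespace and comments. A minimal stand-alone sketch (helper names are my own; in the CPLEX LP format a comment starts with a backslash):

```python
import difflib

def normalize_lp(text):
    """Drop LP-format comments (everything after a backslash) and collapse
    whitespace so cosmetic differences don't mask real ones."""
    lines = []
    for line in text.splitlines():
        line = line.split("\\", 1)[0].strip()
        if line:
            lines.append(" ".join(line.split()))
    return lines

def diff_lp(text_a, text_b):
    """Return the unified diff of two normalised LP files."""
    return list(difflib.unified_diff(
        normalize_lp(text_a), normalize_lp(text_b),
        fromfile="pyomo.lp", tofile="gurobipy.lp", lineterm=""))
```

Anything the diff still reports after normalisation (a missing bound, a changed coefficient, a variable continuous in one file and integer in the other) is a real candidate for the behavioural difference; note that the two libraries may order variables and constraints differently, so some reordering may be needed before this comparison is conclusive.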
Happy to share any data/information to help diagnose the problem.
Appreciate any help, I'm starting to go insane haha.
Thanks,
Fraser :)

Related

How to use ML.Net PredictionEnginePool with ONNX model?

I am trying to use ML.Net to serve predictions from ONNX models behind an API. There is documentation on how to use ML.Net with ONNX models in a console app here; however, as described in that article, the console approach wouldn't scale well. Since the article was written, they added PredictionEnginePool, which solves the scaling problem, but I cannot make it work with ONNX models. When I try to load the model, it throws two exceptions:
InvalidOperationException: Repository doesn't contain entry DataLoaderModel\Model.key
Microsoft.ML.RepositoryReader.OpenEntry(string dir, string name)
InvalidOperationException: Could not load legacy format model
Microsoft.ML.ModelOperationsCatalog.Load(Stream stream, out DataViewSchema inputSchema)
The legacy format exception is interesting because I tried two different models, one from Azure Machine Learning Service with AutoML and one trained locally with Scikit-learn, so I'm not sure which part is "legacy".
The missing Model.key might be the hint, though: the zip model file used in the MS API documentation doesn't contain a single .onnx file; instead it has folders with binary files, and some of those files are actually named Model.key.
My question is:
Has anybody ever used PredictionEnginePool with ONNX models? Is it possible? Or is it not implemented yet? (Not sure if it matters, but both are classification models, one SVM and one LightGBM.)
UPDATE:
Found a way to do this. It looks like the engine pool only supports models in ML.Net format; however, you can open the ONNX model as described in the console app example, save it in ML.Net format, and then use it with the engine pool.
There is a similar example for this here.
The OnnxModelConfigurator class opens the ONNX model and saves it in ML.Net format; in the constructor of Startup.cs you call the configurator to save the model in the right format, and in the ConfigureServices() method you can then create the pool with the converted model.
This works, but with this approach the conversion between the formats becomes part of the API's source code, so you would need to at least restart the app whenever you want to use a new model. That might not be a big deal if a bit of downtime is acceptable, and even if it isn't, you can avoid it with deployment slots, for example. You could also run the conversion as a separate service and then just drop the converted model file where the API can pick it up, so the pool can detect and use the new model.
Anyway, thanks for the answers guys!
I have run into your error before, but not while using the Pool. If you look at this specific comment and the comments that follow, we resolved the issue by doing a full clean of the project. In that case, the user had upgraded to a new version of ML.NET without cleaning the project, and that was causing issues. I am not sure whether this will resolve your issue, but I am one of the engineers working on ML.NET, so if it doesn't, please feel free to create an issue and we can help you resolve it.
You can also take a look at this guide.
In this case, a model trained using Azure Custom Vision is consumed within an ASP.NET application using PredictionEnginePool.

Django - Difference b/w Django-rpy2 & rpy2

Django is showing two rpy2-related packages:
Django-rpy2
rpy2
Are they different? I am familiar with rpy2, but what does Django-rpy2 offer? I am unable to find anything in the django-rpy2 documentation. What is it all about?
Thanks
It looks like django-rpy2 offers some features related to Django models, but there is almost no documentation and the last package update is from July 2015, so I don't recommend using it. Just use Django and rpy2 separately (if you really have to use rpy2). Check the source for more details (especially the models).

Coverage.py not giving valid results

I was hoping someone could explain how coverage.py works, since after reading its documentation I am still rather confused. I am trying to measure the code coverage of a TestCase class, but the results haven't been logical: even after commenting out large chunks of tests, the percentage of missing lines remains the same. I am working in the Community edition of PyCharm and haven't been able to find any alternatives to coverage.py, so if you could recommend another option, that would be appreciated too.
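On the request for an alternative: the standard library ships a `trace` module that does the same basic line counting as coverage.py with no extra install, which also makes it a handy cross-check when the coverage numbers look suspicious. A minimal sketch (the `classify` function is just a stand-in):

```python
import trace

def classify(x):
    if x > 0:
        return "positive"
    return "non-positive"

# count=True records how often each source line ran; trace=False keeps it quiet
tracer = trace.Trace(count=True, trace=False)
tracer.runfunc(classify, 5)   # only the positive branch executes

# results().counts maps (filename, lineno) -> hit count
base = classify.__code__.co_firstlineno
hits = {lineno for (_file, lineno) in tracer.results().counts}
print("positive branch ran:", base + 2 in hits)
print("non-positive branch ran:", base + 3 in hits)
```

If `trace` and coverage.py disagree on which lines ran, the problem is usually stale measurement data; with coverage.py, running `coverage erase` before re-measuring rules that out.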

Adding Code Snippets to the Pentaho Kettle User Defined Java Class

I have written some custom code in Java. I want to add it to the code snippets section of the User Defined Java Class (UDJC) step in Kettle. Is there a way to add custom code snippets to the Classes and code fragments panel in UDJC so that they can be reused?
Thanks.
For the moment there is no way to add code snippets through the UI at runtime. You can submit an issue in the Pentaho JIRA if you want this functionality. As a workaround, you can edit codeSnippits.xml (located under lib/kettle-ui-*.jar/org/pentaho/di/ui/trans/steps/userdefinedjavaclass/) and re-zip the file back into the jar.
I would not recommend going down this path.
The reason is very simple: UDJC in PDI uses Janino, a rather minified (but super fast) Java compiler, and I quote the Pentaho wiki for User Defined Java Class:
"Not 100% Java... The first thing to know is that Janino, and as a consequence this step, doesn't support the complete Java syntax... the most apparent limitation is the absence of generics."
What would happen if we were able to add code snippets on the fly? Probably nothing good.
However, and this is very useful: consider wrapping your code in a JAR package as suggested in the comments, include it in the lib-ext folder of your PDI environment, and import it into your User Defined Java Class steps at will. IMHO, this is the right way.
I hope this helps a bit.

Implementation of Prestack Kirchhoff Time Migration in Java

I want to implement Prestack Kirchhoff Time Migration in Java and later convert the code to a Map-Reduce implementation. Can someone give me links for related study, or help me with how to implement this?
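To make the algorithm concrete: the core of prestack Kirchhoff time migration in its simplest form is a diffraction-stack sum over the double-square-root travel time. The sketch below is my own illustration in Python (constant velocity, nearest-neighbour time interpolation, no anti-aliasing filter or amplitude weighting, which a production implementation would need); the loop structure translates line-for-line to Java, and the outer loops over image points are a natural unit of Map-Reduce parallelism:

```python
import math

def kirchhoff_time_migration(traces, src_x, rcv_x, dt, dx, velocity, nt_out, nx_out):
    """Naive constant-velocity prestack Kirchhoff time migration.

    traces: list of traces, traces[i][j] = amplitude of trace i at time sample j
    src_x / rcv_x: surface positions of the source and receiver of each trace
    For every image point (t0, x), sum the input amplitude found at the
    double-square-root travel time (source leg + receiver leg).
    """
    image = [[0.0] * nx_out for _ in range(nt_out)]
    for it0 in range(nt_out):
        t0 = it0 * dt                  # two-way zero-offset time of the image point
        for ix in range(nx_out):
            x = ix * dx                # lateral position of the image point
            total = 0.0
            for trc, sx, rx in zip(traces, src_x, rcv_x):
                t_src = math.sqrt((t0 / 2.0) ** 2 + ((x - sx) / velocity) ** 2)
                t_rcv = math.sqrt((t0 / 2.0) ** 2 + ((x - rx) / velocity) ** 2)
                j = int(round((t_src + t_rcv) / dt))   # nearest input time sample
                if 0 <= j < len(trc):
                    total += trc[j]
            image[it0][ix] = total
    return image
```

For the Map-Reduce conversion, one natural split is to map each input trace to the image points its samples contribute to, then reduce by summing contributions per image point, since the per-trace inner loop is independent of all other traces.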