Generalizing a nonlinear model in Pyomo

I would like to know if there is a way to take a nonlinear model in Pyomo format from the user and load it into the Pyomo optimization code. I'm asking this since I'm dealing with nonlinear models and need to develop generic code to implement a particular type of optimization using these models. Kindly help me out.
Thank you

Related

Prediction models, Objective functions and Optimization

How do we define objective functions while doing optimization using Pyomo in Python? We have defined prediction models separately. The next step is to bring the objective functions from the prediction models (gradient boosting, random forest, linear regression, and others) and optimize them to find the maximum and minimum. Please suggest and share any working example in Pyomo.
Because Pyomo works with algebraic expressions, you should:
Define the mathematical expression of your prediction model function.
Implement the corresponding mathematical model in Pyomo, including the needed parameters, variables, and other constraints.
Apply the min-max optimization.
You can iterate in a cycle as follows:
Prediction model function -> Min-max refinement -> Prediction model function adjustment -> Min-max refinement -> ...
Repeat as many times as needed to reach your expected accuracy. An API connection and a multi-threaded implementation could also work.

How do I answer this Computer Vision Theoretical question?

Suppose there is an image containing multiple objects of different types. The objective of the problem is to recognize objects using their primary features (colour, texture, shape). Explain your own idea of what concepts you will apply, and how you will apply them, to differentiate/classify the objects in the image by extracting primary features (or a combination of features) of the objects. Also, justify how your idea can produce the best accuracy.
Since this is a theoretical question, it can have many answers. The simplest approach is to use k-means or weighted k-means on the features you have. If your features are quite distinctive, then k-means would be able to classify reasonably accurately. You might still have to experiment with how to encode some of the more esoteric features as k-means inputs, though. A more involved method would be to train your own CNN for classification using the features you provide.
Since this is a theoretical question this is all the answer I can provide you with.
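As a concrete illustration of the k-means route, here is a sketch that clusters per-object feature vectors (mean colour plus a simple texture score) with scikit-learn. The feature values are made up; in practice they would come from segmenting the image and measuring each object:

```python
import numpy as np
from sklearn.cluster import KMeans

# Illustrative per-object feature vectors: [mean R, mean G, mean B, texture score].
features = np.array([
    [200,  30,  30, 0.10],  # reddish, smooth
    [210,  40,  25, 0.15],  # reddish, smooth
    [ 30, 180,  40, 0.80],  # greenish, textured
    [ 25, 190,  35, 0.75],  # greenish, textured
], dtype=float)

# Scaling matters: colour channels (0-255) would otherwise drown out texture (0-1).
scaled = (features - features.mean(axis=0)) / features.std(axis=0)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scaled)
print(labels)
```

With well-separated features like these, the two reddish objects end up sharing one cluster label and the two greenish ones the other; how well this carries over to a real image depends entirely on how distinctive the extracted features are.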

OpenCV SVM - object does not belong to any of trained classes

I'm using OpenCV (3.1) SVM with 3 classes. Is there any way to handle input data which does not belong to any of these classes? Is there a possibility to get a probability from the prediction?
I simply want to mark data from an unknown class as "Does not belong to any of trained classes".
Thank you
Looking at the SVM docs (the predict function, in particular), it seems that the best you can do is get the distance from the support vectors, and it looks like you can only get even that from a binary classifier.
Not sure how constrained to OpenCV you are, but if you can use scikit-learn for your problem, its SVM has a predict_proba function that should be helpful. There is also a predict_log_proba function, if that's your preference. Also, note that you'll need to set probability=True when calling the fit function if you go this route.
If you're constrained to C/C++, you might look into LibSVM, as it can also give probabilities, although I'm not as familiar with its API. Also note that the OpenCV and scikit-learn implementations are both based on LibSVM.
Hope one of these works for you!
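A sketch of the scikit-learn route, including a probability threshold for flagging inputs that fit no trained class. The toy 2-D data and the threshold value 0.6 are arbitrary choices for illustration; the threshold in particular would need tuning on real data:

```python
import numpy as np
from sklearn.svm import SVC

# Three toy classes in 2-D (illustrative data, one Gaussian blob per class).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.3, size=(20, 2)) for c in ([0, 0], [5, 0], [0, 5])])
y = np.repeat([0, 1, 2], 20)

# probability=True is required for predict_proba to be available.
clf = SVC(probability=True, random_state=0).fit(X, y)

def classify(points, threshold=0.6):
    """Return class labels, or -1 when no class is confident enough."""
    proba = clf.predict_proba(points)
    labels = proba.argmax(axis=1)
    labels[proba.max(axis=1) < threshold] = -1  # "does not belong to any class"
    return labels

print(classify(np.array([[0.1, 0.0],      # near the class-0 blob
                         [2.5, 2.5]])))   # roughly equidistant from all three
```

A point near a training blob gets a confident label, while a point far from all classes spreads its probability mass and falls below the threshold, so it is marked -1. Note that Platt-scaled SVM probabilities can be poorly calibrated on small datasets, so the threshold is a heuristic rather than a guarantee.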

How can I analyze a nonstructured text?

I use TF-IDF to assign weights that help me construct my dictionary, but my model is not really good enough because I have unstructured text.
Any suggestions for algorithms similar to TF-IDF?
When you say your model is not good enough, does that mean your generated dictionary is not good enough? Extracting key terms and constructing the dictionary using TF-IDF weights is really a feature selection step.
To extract or select features for your model, you can follow other approaches like principal component analysis, latent semantic analysis, etc. Lots of other feature selection techniques in machine learning can be useful too!
But I truly believe that for a sentiment classification task, TF-IDF should be a very good approach for constructing the dictionary. I would rather suggest tuning your model's parameters during training than blaming the feature selection approach.
There are many deep learning techniques as well that are applicable for your target task.
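For reference, the TF-IDF dictionary-construction step itself is only a few lines with scikit-learn's TfidfVectorizer; the toy review snippets below are made up for illustration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "the battery life is great",
    "terrible battery and terrible screen",
    "great screen great price",
]

# stop_words="english" drops function words; unstructured text often also
# benefits from extra cleaning (lowercasing is already the default).
vec = TfidfVectorizer(stop_words="english")
tfidf = vec.fit_transform(docs)  # sparse (n_docs, n_terms) matrix

# The learned vocabulary is the "dictionary"; the weights rank terms per document.
print(sorted(vec.vocabulary_))
print(tfidf.shape)
```

If the dictionary this produces looks poor, parameters like `min_df`, `max_df`, and `ngram_range` are usually the first things to tune before switching feature selection methods.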

Are Markov Random Fields implemented in OpenCV?

Markov Random Fields are a really popular way to model an image, but I can't find a direct reference to them being implemented in OpenCV. Perhaps they are named differently, or can be built from some indirect method.
As the title states, are MRFs implemented in OpenCV? And if not, what is the popular way to represent them?
OpenCV deals mostly with statistical machine learning rather than things that go under the name Bayesian Networks, Markov Random Fields, or graphical models.