Markov Random Fields are a popular way to model images, but I can't find a direct reference to them being implemented in OpenCV. Perhaps they are named differently, or can be built by some indirect method.
As the title states, are MRFs implemented in OpenCV? And if not, what is the popular way to represent them?
OpenCV deals mostly with statistical machine learning rather than with the techniques that go under the names Bayesian networks, Markov random fields, or graphical models.
In pose estimation using the Associative Embedding technique, I still don't understand how to group the detected points from heatmaps into individual human poses using the associative embedding layer. Is there any code that clearly illustrates this? I'm using the EfficientHRNet approach for pose estimation.
I have extracted keypoints from the heatmaps and need to group those points into individual poses using the embedding layer's output.
From the OpenVINO perspective, we can offer:
This model: human-pose-estimation-0007
This IE demo: Human Pose Estimation Python* Demo
This model uses the Associative Embedding technique.
However, if you want to build it from scratch, you'll need to design your own deep-learning architecture, then implement and train the neural network.
This research paper might give you some insight into the decisions you'll need to make (e.g., batch size, optimization algorithm, learning rate, etc.).
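To make the grouping step concrete, here is a minimal sketch of the greedy grouping idea used with associative embeddings: each detected keypoint carries a scalar tag, and keypoints are assigned to the pose whose running mean tag is closest, or start a new pose if no tag is close enough. All names, the data layout, and the threshold value are illustrative assumptions, not the EfficientHRNet implementation.

```python
# Hedged sketch: greedy grouping of detected keypoints into poses by
# associative-embedding tag distance. Data layout and threshold are
# illustrative assumptions.
import numpy as np

def group_keypoints(detections, tag_threshold=1.0):
    """detections: list over joint types; each entry is a list of
    (x, y, score, tag) tuples for that joint. Returns a list of poses,
    each a dict {joint_index: (x, y, score)}."""
    poses = []       # one dict per person
    pose_tags = []   # running mean tag per person
    for joint_idx, joint_dets in enumerate(detections):
        # process highest-scoring detections of this joint first
        for x, y, score, tag in sorted(joint_dets, key=lambda d: -d[2]):
            if pose_tags:
                dists = [abs(tag - t) for t in pose_tags]
                best = int(np.argmin(dists))
            else:
                best = None
            if best is not None and dists[best] < tag_threshold \
                    and joint_idx not in poses[best]:
                # assign to the closest existing pose
                poses[best][joint_idx] = (x, y, score)
                # update that pose's running mean tag
                n = len(poses[best])
                pose_tags[best] += (tag - pose_tags[best]) / n
            else:
                # no sufficiently close pose: start a new person
                poses.append({joint_idx: (x, y, score)})
                pose_tags.append(float(tag))
    return poses
```

Real implementations typically refine this with per-joint non-maximum suppression and score-weighted matching, but the core idea is the same: keypoints with similar tags belong to the same person.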
Suppose there is an image containing multiple objects of different types. The objective of the problem is to recognize objects using their primary features (colour, texture, shape). Explain, in your own words, what concepts you would apply, and how you would apply them, to differentiate/classify the objects in the image by extracting primary features (or combinations of features). Also, justify how your idea can produce the best accuracy.
Since this is a theoretical question, it can have many answers. The simplest approach is to use k-means or weighted k-means on the features you have. If your features are fairly distinctive, k-means should classify reasonably accurately, although you may still have to experiment with how to encode some of the more esoteric features as inputs. A more involved approach would be to train your own CNN model for classification using the features you provide.
Since this is a theoretical question this is all the answer I can provide you with.
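To make the k-means suggestion concrete, here is a small sketch in plain numpy (in practice you might use sklearn.cluster.KMeans instead). The feature vectors, here imagined as things like mean colour plus a crude texture measure, are illustrative assumptions.

```python
# Minimal k-means sketch for clustering objects by feature vectors.
# Feature choices are illustrative; plug in your colour/texture/shape
# descriptors.
import numpy as np

def kmeans(features, k, iters=50):
    # farthest-point initialisation: deterministic and well spread out
    centers = [features[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(features - c, axis=1) for c in centers],
                   axis=0)
        centers.append(features[d.argmax()])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        # assign each sample to its nearest centre
        d = np.linalg.norm(features[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        # move each centre to the mean of its assigned samples
        for j in range(k):
            if np.any(labels == j):
                centers[j] = features[labels == j].mean(axis=0)
    return labels, centers
```

Note that k-means is unsupervised, so it groups similar objects together but does not name the classes; for labelled classification you would still map clusters to classes or use a supervised model.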
I'm using OpenCV (3.1) SVM with 3 classes. Is there any way to handle input data that does not belong to any of these classes? Is there a possibility to get a probability from the prediction?
I simply want to mark data from an unknown class as "Does not belong to any of trained classes".
Thank you
Looking at the SVM docs (the predict function, in particular), it seems that the best you can do is get the distance to the separating hyperplane, and it looks like you can only even get that from a binary classifier.
Not sure how constrained to OpenCV you are, but if you can use scikit-learn for your problem, its SVM has a predict_proba function that should be helpful. There is also a predict_log_proba function, if that's your preference. Also, note that you'll need to construct the classifier with probability=True for these methods to work if you go this route.
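A small sketch of the rejection idea with scikit-learn: train a 3-class SVC with probability estimates enabled, then mark a sample as unknown when no class probability is confident enough. The toy data and the 0.5 threshold are arbitrary illustrative choices; in practice you would tune the threshold on held-out data.

```python
# Hedged sketch: 3-class SVM with probability estimates and a simple
# confidence-threshold rejection rule. Toy data and threshold are
# illustrative assumptions.
import numpy as np
from sklearn.svm import SVC

# toy 2-D training data: three well-separated classes, 10 samples each
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc, 0.1, size=(10, 2))
               for loc in ([0, 0], [5, 5], [10, 0])])
y = np.repeat([0, 1, 2], 10)

clf = SVC(kernel="rbf", gamma="scale", probability=True, random_state=0)
clf.fit(X, y)

def predict_with_reject(sample, threshold=0.5):
    # reject when even the most likely class is not confident enough
    proba = clf.predict_proba([sample])[0]
    if proba.max() < threshold:
        return "Does not belong to any of trained classes"
    return clf.classes_[proba.argmax()]
```

One caveat worth knowing: SVC's probabilities come from Platt scaling fitted by internal cross-validation, so they can be poorly calibrated on small or unusual datasets, and a far-away outlier is not guaranteed to get low probabilities for every class.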
If you're constrained to C/C++, you might look into LibSVM, as it can also give probabilities, although I'm not as familiar with its API. Also note that the OpenCV and scikit-learn implementations are both based on LibSVM.
Hope one of these works for you!
We're working on a machine learning project in which we'd like to see the influence of certain online sample-embedding methods on SVMs.
In the process we've tried interfacing with Pegasos and dlib, as well as designing (and attempting to write) our own SVM implementation.
dlib seems promising as it allows interfacing with user written kernels.
Yet kernels don't give us the desired "online" behavior (unless that assumption is wrong).
Therefore, if you know of an SVM library which supports online embedding and custom-written embedders, it would be of great help.
Just to be clear about "online": it is crucial that the embedding process happens online, in order to avoid heavy memory usage.
We basically want to do the following within stochastic sub-gradient descent (in very general pseudocode):
w = 0 vector
for t = 1:T
    i = random integer from [1, n]
    embed(sample_xi)
    // sample_xi is passed to the sub-gradient of loss_i as a parameter
    w = w - (alpha / t) * sub_gradient(loss_i)
end
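The pseudocode above can be sketched as a runnable Pegasos-style SGD for a linear SVM with hinge loss, where each sample is embedded on the fly rather than pre-computed. The embed function here is a stand-in for your own online embedder, and all parameter values are illustrative assumptions.

```python
# Runnable sketch of the pseudocode: Pegasos-style stochastic
# sub-gradient descent with an online per-sample embedding step.
import numpy as np

def embed(x):
    # stand-in for your real online embedder; here just a cheap
    # feature scaling so the example stays self-contained
    return x / 10.0

def sgd_svm(X, y, T=2000, lam=0.01, seed=0):
    """X: (n, d) raw samples; y: labels in {-1, +1}."""
    rng = np.random.default_rng(seed)
    n = len(X)
    w = np.zeros(embed(X[0]).shape[0])
    for t in range(1, T + 1):
        i = rng.integers(n)        # i = random integer from [1, n]
        phi = embed(X[i])          # embed(sample_xi), done online
        eta = 1.0 / (lam * t)      # step size alpha / t
        # sub-gradient of the regularised hinge loss on sample i
        if y[i] * w.dot(phi) < 1:
            w = (1 - eta * lam) * w + eta * y[i] * phi
        else:
            w = (1 - eta * lam) * w
    return w
```

Because only one sample is embedded per iteration, memory usage stays constant in the number of samples, which is the "online" property you describe.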
I think in your case you might want to consider Budgeted Stochastic Gradient Descent for Large-Scale SVM Training (BSGD) [1] by Wang, Crammer, and Vucetic.
As the paper explains with regard to the "curse of kernelization", you might want to explore this option instead of the approach indicated in the pseudocode in your question.
The Shark Machine Learning Library implements BSGD. Check a quick tutorial here
Maybe you want to use something like dlib's empirical kernel map. You can read its documentation, and particularly the example program, for the gory details of what it does, but basically it lets you project a sample into the span of some basis set in a kernel feature space. There are even algorithms in dlib that iteratively build the basis set, which may be what you are asking about.
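To illustrate the idea behind an empirical kernel map (without reproducing dlib's API), here is a numpy sketch: given a basis set and a kernel, a sample x is mapped to K^{-1/2} k_B(x), so that ordinary dot products of the projected vectors approximate kernel values. The RBF kernel and basis choice are illustrative assumptions.

```python
# Illustrative numpy version of what an empirical kernel map does:
# project samples into the span of a basis set in kernel feature
# space, so kernel methods can be run as linear methods on the
# projected vectors. A sketch of the concept, not dlib's API.
import numpy as np

def rbf(a, b, gamma=0.5):
    return np.exp(-gamma * np.sum((a - b) ** 2))

def build_ekm(basis, kernel=rbf):
    # Gram matrix K[i, j] = k(b_i, b_j); the projection uses K^{-1/2}
    K = np.array([[kernel(bi, bj) for bj in basis] for bi in basis])
    vals, vecs = np.linalg.eigh(K)
    vals = np.maximum(vals, 1e-10)   # guard tiny/negative eigenvalues
    inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
    def project(x):
        kx = np.array([kernel(x, b) for b in basis])
        return inv_sqrt @ kx
    return project
```

For samples that lie in the span of the basis, dot products of the projections reproduce the kernel exactly; for other samples they approximate it, which is what makes a linear SGD on the projected vectors behave like a kernel method.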
I just wanted to know if anyone has pointers to a library or libraries that support Markov modelling and graphical graph representation, as for a project I must simulate a transport model and be able to develop an interface for it too. I am relatively new to C++.
Have a look at the Boost Graph Library, as it will simplify all your graph work. I am unsure whether there is a Markov chain algorithm (I am looking for one too), but it should be easy to write once you have the graph -- perhaps a concurrent Monte Carlo approach?
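The Markov chain simulation itself is only a few lines once the transition structure exists. Here is a quick sketch (in Python for brevity; the same logic ports directly to C++ once the graph lives in Boost Graph): states are nodes and each row of the matrix gives the transition probabilities out of one state. The transition matrix is an illustrative assumption.

```python
# Sketch: Monte Carlo simulation of a Markov chain over a transport
# network. Rows of P are the outgoing transition probabilities of
# each state and must sum to 1.
import numpy as np

def simulate_chain(P, start, steps, seed=0):
    """P: (n, n) row-stochastic transition matrix; returns the
    sequence of visited states, starting at `start`."""
    rng = np.random.default_rng(seed)
    path = [start]
    state = start
    for _ in range(steps):
        # draw the next state according to the current state's row
        state = rng.choice(len(P), p=P[state])
        path.append(state)
    return path
```

Running many such walks in parallel (the "concurrent Monte Carlo" idea above) and counting state visits gives an estimate of the chain's stationary distribution.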
Numerical Recipes has many algorithms and code in both C and C++.
The Graphviz tools are all you need to draw graphs.