Export OpenCV Gaussian mixture model of BackgroundSubtractorMOG2 - C++

I'm using the MOG2 algorithm for background subtraction in OpenCV 3.2. I want to look at the learned Gaussian mixture models (weights, variance, mean, ...), but OpenCV does not provide a built-in function to access these values.
In the bgfg_gaussmix2.cpp source code, I found the variables I need in the class BackgroundSubtractorMOG2Impl:
UMat u_weight;
UMat u_variance;
UMat u_mean;
UMat u_bgmodelUsedModes;
So what I did was add some functions that return these variables (in the same style as the built-in getters):
virtual UMat getWeight() const { return u_weight; }
virtual UMat getVariance() const { return u_variance; }
virtual UMat getMean() const { return u_mean; }
virtual UMat getNmodes() const {return u_bgmodelUsedModes;}
Then I also added the following to the background_segm.hpp file, in class CV_EXPORTS_W BackgroundSubtractorMOG2:
CV_WRAP virtual UMat getWeight() const = 0;
CV_WRAP virtual UMat getVariance() const = 0;
CV_WRAP virtual UMat getMean() const = 0;
CV_WRAP virtual UMat getNmodes() const = 0;
After this, I built and installed the modified OpenCV source code. Then I wrote my own program to run the BackgroundSubtractorMOG2 algorithm. I can call the new functions I added to the OpenCV source code, but the returned UMats are just full of zeros.
So my question is how to get the UMats in the right way?
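One likely explanation, judging from the 3.2 sources (an educated guess, not verified against your build): the u_* UMat members are only allocated and updated by the OpenCL branch of apply(). When the call falls back to the plain CPU branch, which is the usual case, the whole mixture is kept in the packed Mat bgmodel (an array of {weight, variance} pairs for every pixel/mode, followed by the means) and the per-pixel mode counts live in Mat bgmodelUsedModes, so the UMats stay zero. A hedged sketch of getters for the CPU branch, using the member names from bgfg_gaussmix2.cpp:
// Sketch only: expose the CPU-side model instead of the OpenCL UMats.
virtual Mat getModel() const { return bgmodel.clone(); } // packed weights/variances, then means
virtual Mat getUsedModes() const { return bgmodelUsedModes.clone(); } // active modes per pixel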


How to mock an OpenCV camera using GMock, or how to test a method that uses the camera with GTest?

I have written a wrapper for the OpenCV library. I created a Camera class that allows using a hardware camera.
The method under test looks like this:
bool Camera::Open(int idx) {
cam_ = cv::VideoCapture(idx);
if (cam_.isOpened())
return true;
return false;
}
I want to test the Open() method using GTest, but I don't want to test it with a real physical camera. I think the best way is to use GMock, but I really don't know how to mock the camera.
One option is to create an interface for the camera and a factory function:
class ICam {
public:
virtual ~ICam() = default; // virtual destructor so the mock can be deleted via ICam*
virtual bool isOpened() const = 0;
};
class ICamFactory {
public:
virtual ~ICamFactory() = default;
virtual std::unique_ptr<ICam> VideoCapture(int idx) = 0;
};
Your Camera class is then constructed with a CamFactory which implements ICamFactory. In your test you can create a MockCamFactory which returns a MockCam. You can then set expectations on MockCamFactory and MockCam:
auto mockCam = std::make_unique<MockCam>();
EXPECT_CALL(*mockCam, isOpened()) // set expectations before handing the mock over
.WillOnce(Return(true));
EXPECT_CALL(mockCamFactory, VideoCapture(idx))
.WillOnce(Return(ByMove(std::move(mockCam)))); // unique_ptr is move-only
ASSERT_TRUE(camera.Open(idx));
Your real Cam class then wraps the calls to OpenCV.
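For completeness, a minimal sketch of what the test doubles might look like (MockCam and MockCamFactory are hypothetical names; this assumes a GoogleMock version that provides the MOCK_METHOD macro):
#include <gmock/gmock.h>
#include <memory>
class MockCam : public ICam {
public:
MOCK_METHOD(bool, isOpened, (), (const, override));
};
class MockCamFactory : public ICamFactory {
public:
MOCK_METHOD(std::unique_ptr<ICam>, VideoCapture, (int idx), (override));
};
Camera would then receive the factory through its constructor (e.g. Camera camera(&mockCamFactory);), which is the injection point the test relies on.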

C++ shared_ptr interesting behavior when passing into function (solved but curious)

Background
I'm writing some C++ code for a project that involves simulating a lot of state-space mathematical models. Because there are several different types of models with varying degrees of complexity (i.e. non-linear/linear & time-varying/time-invariant), I created a polymorphic class structure that can represent the distinguishing features of each type while still keeping the common matrix properties bundled together nicely. This way I can generalize easily to various model types for simulation inside of Odeint.
The current version of the simple base class looks like this:
class SS_ModelBase
{
public:
virtual ~SS_ModelBase() = default; // virtual destructor for safe polymorphic use
virtual Eigen::MatrixXd getA() = 0;
virtual Eigen::MatrixXd getB() = 0;
virtual Eigen::MatrixXd getC() = 0;
virtual Eigen::MatrixXd getD() = 0;
virtual Eigen::MatrixXd getX0() = 0;
virtual int getNumInputs() = 0;
virtual int getNumOutputs() = 0;
virtual int getNumStates() = 0;
private:
};
typedef boost::shared_ptr<SS_ModelBase> SS_ModelBase_sPtr;
Each derived class for the mentioned types then has its own variables for storing the specific representations of the matrices A, B, C, D, & X0. The inputs, outputs, and states members hold dimensional information that might be needed for later manipulation. This has been working extremely well for me in practice.
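For context, a minimal sketch of what one such derived class might look like (a hypothetical reconstruction; the question only shows the base class and, further down, the SS_NLTIVModel constructor, from which these member names are inferred):
class SS_NLTIVModel : public SS_ModelBase
{
public:
SS_NLTIVModel(const SS_ModelBase_sPtr& base); // converting constructor, shown below
Eigen::MatrixXd getA() override { return A; }
Eigen::MatrixXd getB() override { return B; }
Eigen::MatrixXd getC() override { return C; }
Eigen::MatrixXd getD() override { return D; }
Eigen::MatrixXd getX0() override { return X0; }
int getNumInputs() override { return inputs; }
int getNumOutputs() override { return outputs; }
int getNumStates() override { return states; }
private:
Eigen::MatrixXd A, B, C, D, X0, U;
int inputs, outputs, states;
};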
An example of how I would use the model in the simulator is shown below:
Eigen::MatrixXd StateSpaceSimulator::stepResponse(const double start, const double stop, const double dt,
const SS_ModelBase_sPtr& model, const PID_Values pid)
{
/* Due to how Odeint works internally, it's not feasible to pass in a full state space model object
and run the algorithm directly on that. (Odeint REALLY dislikes pointers) Instead, a local copy of the
model will be made. For now, assume that only NLTIV models are handled. */
SS_NLTIVModel localModel(model);
/* Copy the relevant data from the model for cleaner code in main sim*/
Eigen::MatrixXd x = localModel.getX0();
Eigen::MatrixXd A = localModel.getA();
Eigen::MatrixXd B = localModel.getB();
Eigen::MatrixXd C = localModel.getC();
Eigen::MatrixXd D = localModel.getD();
//Other irrelevant code that actually performs numerical simulations
}
The custom copy constructor in the above code (strictly speaking a converting constructor, since it takes a shared_ptr to the base class) looks like this:
/* Copy constructor */
SS_NLTIVModel(const SS_ModelBase_sPtr& base)
{
inputs = base->getNumInputs();
outputs = base->getNumOutputs();
states = base->getNumStates();
A.resize(states, states); A.setZero(states, states);
B.resize(states, inputs); B.setZero(states, inputs);
C.resize(outputs, states); C.setZero(outputs, states);
D.resize(outputs, inputs); D.setZero(outputs, inputs);
X0.resize(states, 1); X0.setZero(states, 1);
U.resize(inputs, 1); U.setZero(inputs, 1);
}
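As a side note, Eigen's setZero(rows, cols) overload already resizes the matrix, so each resize/setZero pair above can collapse into a single call:
A.setZero(states, states); // resizes to states x states and zero-fills in one step
B.setZero(states, inputs);
C.setZero(outputs, states);
D.setZero(outputs, inputs);
X0.setZero(states, 1);
U.setZero(inputs, 1);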
Question
I noticed in some of my simulations that I would get memory access violation errors in the copy constructor when trying to resize the various matrices. std::malloc() was being called deep inside the Eigen framework and was throwing errors. After many hours pounding my head against the wall trying to figure out why Eigen was suddenly not working, I stumbled upon a solution, but I don't know why it works or whether there is a better way.
My old broken function signature looked like this:
Eigen::MatrixXd StateSpaceSimulator::stepResponse(const double start, const double stop, const double dt,
SS_ModelBase_sPtr model, PID_Values pid)
The new working function signature looks like this:
Eigen::MatrixXd StateSpaceSimulator::stepResponse(const double start, const double stop, const double dt,
const SS_ModelBase_sPtr& model, const PID_Values pid)
What's the run-time difference between these two that could be causing my error? I know the first will increment the reference counter for the shared_ptr and the second will not, but other than that, I'm not sure what would change the behavior enough to cause my problem... thoughts?

OpenCV C++, Define a background model

I am busy converting and interpreting software used in previous years of my final year project.
I would just like to check whether it is possible to define a background model in a header file, as I am currently getting an error.
class CWaterFill
{
public:
void Initialise();
Mat ContourFilter(Mat Img, int minSize);
Mat superminImg;
protected:
BackgroundSubtractorMOG2 m_bg_model;//Define the background model.
};
It is then used in the .cpp file in the following function:
void CWaterFill::GMM2(Mat InputImg, int nFrame, double learnRate)
{
m_bg_model(InputImg, m_fgmask, learnRate);//m_fgmask outlook is
}
Use a pointer to an abstract BackgroundSubtractor object:
...
protected:
cv::Ptr<BackgroundSubtractor> m_bg_model;//Define the background model.
And then create the concrete type, e.g.:
m_bg_model = createBackgroundSubtractorMOG2(20, 16, true);
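With the pointer in place, the update call in GMM2 changes as well, since the abstract interface (assuming OpenCV 3.x, where createBackgroundSubtractorMOG2 lives) exposes apply() instead of operator():
m_bg_model->apply(InputImg, m_fgmask, learnRate); // writes the foreground mask, takes the learning rate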

Create Super Resolution image from multiple images using superres

I have a sequence of images and I extract a card from it. After this process the card is correctly aligned and projected back onto a plane (warpPerspective). However, the quality is too low to e.g. read text from that card. Thus I tried to use the superres module to increase the resolution, but the documentation is pretty shallow and I have yet to find out how I can pass multiple images to the algorithm.
I tried to implement a custom FrameSource, which is basically an adapter over a std::vector, but for some reason I get a segfault.
class InterFrameSource : public superres::FrameSource {
std::vector<cv::Mat> frames;
std::vector<cv::Mat>::iterator iter;
public:
InterFrameSource(std::vector<cv::Mat> _frames) : frames(_frames)
{
reset();
}
virtual void nextFrame(OutputArray _frame)
{
_frame.getMatRef().setTo(*iter);
++iter;
}
virtual void reset() {
iter = frames.begin();
}
};
Edit
The cv::Mats are all CPU-only.
OK, after two days I finally got it. I needed to invert the copying logic:
virtual void nextFrame(OutputArray _frame)
{
if (iter == frames.end()) return;
iter->copyTo(_frame);
++iter;
}
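Wiring the frame source into the module could then look roughly like this (a sketch assuming OpenCV 3.x and the CPU BTV-L1 implementation; the scale and radius values are illustrative, not tuned):
#include <opencv2/superres.hpp>
cv::Ptr<cv::superres::SuperResolution> sr = cv::superres::createSuperResolution_BTVL1();
sr->setScale(2); // 2x upscaling
sr->setTemporalAreaRadius(2); // neighbouring frames fused per output frame
sr->setInput(cv::makePtr<InterFrameSource>(frames));
cv::Mat result;
sr->nextFrame(result); // the first call runs the actual fusion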

OpenCV - How to implement FeatureDetector interface

I am using the C++ implementation of OpenCV 2.4.6.1 for Ubuntu 12.10 on an x86_64 architecture. I am working on wrapping this code of the Agast Corner Detector inside a class inheriting from cv::FeatureDetector.
Inspecting the features2d module header code and observing other implementations, I found that I must implement the detectImpl method:
virtual void detectImpl( const Mat& image, std::vector<KeyPoint>& keypoints, const Mat& mask=Mat() ) const = 0;
Usually an operator() method with the following signature is also implemented:
CV_WRAP_AS(detect) void operator()(const Mat& image, CV_OUT std::vector<KeyPoint>& keypoints) const;
Looking at other implementations, I cannot say exactly what each of these methods should do. The second one, operator(), I guess is somehow related to the detect method that is called to detect keypoints:
cv::Ptr<cv::FeatureDetector> detector = cv::FeatureDetector::create("...");
detector->detect(img, keypoints);
In your experience, what's the difference between these two methods and what should each of them implement?
Regarding the detector's instantiation via the factory method cv::FeatureDetector::create, I have some clues: the attribute info of type AlgorithmInfo* is usually declared as a public member of the detector class implementation, and the CV_INIT_ALGORITHM macro is used in the features2d_init source file.
What should I implement in order to be able to instantiate my custom FeatureDetector using the factory method?
Finally, after a few days of work, I succeeded and learned a few lessons about implementing the cv::FeatureDetector interface:
Put the wrapping class inside the cv namespace.
The only method you must implement is detectImpl, using the signature that method has in the OpenCV version you are using.
Implementing the operator() method is optional; in other implementations where it is used (e.g. MserFeatureDetector and StarDetector), this method is called from detectImpl through the class instance:
void ...::detectImpl( const Mat& image, std::vector<KeyPoint>& keypoints, const Mat& mask ) const {
...
(*this)(grayImage, keypoints);
...
}
void ...::operator()(const Mat& img, std::vector<KeyPoint>& keypoints) const{
...
}
Be aware that detectImpl is a const method and hence cannot modify instance members, so it might be useful to define the concrete behavior of the detector in a free function, as done in other detector implementations (e.g. FastFeatureDetector or StarDetector).
To enable the wrapper to be instantiated via the factory method cv::FeatureDetector::create, you should add the public method AlgorithmInfo* info() const; to your class declaration and register the class as an algorithm inside OpenCV with the CV_INIT_ALGORITHM macro, as follows:
namespace cv {
CV_INIT_ALGORITHM(AgastFeatureDetector, "Feature2D.AGAST",
obj.info()->addParam(obj, "threshold", obj.threshold);
obj.info()->addParam(obj, "nonmaxsuppression", obj.nonmaxsuppression);
obj.info()->addParam(obj, "type", obj.type));
}
If your class doesn't need any parameters, you can simply replace the whole params part with obj.info().
Remember also to do this outside the source files where you declare (.h) or define (.cpp) your wrapper, and to include the opencv2/core/internal.hpp header.
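Putting the pieces together, a hedged sketch of the wrapper declaration (OpenCV 2.4.x style; the parameter members mirror the CV_INIT_ALGORITHM call above):
#include <opencv2/features2d/features2d.hpp>
namespace cv {
class CV_EXPORTS AgastFeatureDetector : public FeatureDetector
{
public:
AgastFeatureDetector(int threshold = 10, bool nonmaxsuppression = true, int type = 0);
AlgorithmInfo* info() const; // required by the Algorithm factory machinery
protected:
// the only mandatory override; it is const, so any mutable state belongs in a helper
virtual void detectImpl(const Mat& image, std::vector<KeyPoint>& keypoints, const Mat& mask = Mat()) const;
int threshold;
bool nonmaxsuppression;
int type;
};
}
With this in place, cv::FeatureDetector::create("AGAST") resolves through the "Feature2D.AGAST" name registered by CV_INIT_ALGORITHM.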