Is there a Mat arithmetic + operator alternative for UMat? - c++

I'm trying to apply a homomorphic filter in my video player program.
While rewriting the code to use UMat, I found something that is incompatible with the existing Mat-based code.
In the Mat code:
cv::Mat temp;
someImage.convertTo(temp, CV_32FC1);
temp = temp + 0.01;
What does this mean?
And how can I use this operation with UMat?

OpenCV's operator+(const Mat& a, const Scalar& s) adds a scalar value to each element of the matrix. It's practically the same as calling void add(InputArray src1, InputArray src2, OutputArray dst, InputArray mask=noArray(), int dtype=-1).
The InputArray interface accepts Mats and UMats as well as Scalars, so you can just call:
cv::UMat temp(3, 3, CV_32FC1, cv::Scalar(0));
cv::add(temp, 0.01, temp);
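Applied to the snippet from the question, a minimal sketch (assuming someImage is the same source image as in the Mat version):
cv::UMat temp;
someImage.convertTo(temp, CV_32FC1); // convertTo takes an OutputArray, so a UMat destination works
cv::add(temp, 0.01, temp);           // equivalent of temp = temp + 0.01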

Related

Writing template functions accepting inputs of type cv::Mat or cv::UMat

What is the simplest and cleanest way to write a function accepting input arguments of type Mat or UMat?
Should I use InputArray, use templates, or is there a better alternative? I currently have functions with identical implementations written for both Mat and UMat.
The function should take full advantage of the UMat abstraction over OpenCL, run roughly as fast as if it were written just for UMats, and avoid the overhead of copying UMats to Mats.
An example of a function which I might want to define for both Mat and UMat is the following (please do not propose refactoring to remove the local Mat/UMat variables; this is just an example):
using namespace cv;
void foo(const Mat& in1, const Mat& in2, Mat& out)
{
    Mat aux1;
    Mat aux2;
    exp(in1, aux1);
    exp(in2, aux2);
    add(aux1, aux2, out);
    return;
}
void foo(const UMat& in1, const UMat& in2, UMat& out)
{
    UMat aux1;
    UMat aux2;
    exp(in1, aux1);
    exp(in2, aux2);
    add(aux1, aux2, out);
    return;
}
As suggested by @Miki and @Gianni, we can choose InputArray and OutputArray as the general types covering Mat, Mat_<T>, Matx<T, m, n>, std::vector<T>, std::vector<std::vector<T> >, std::vector<Mat>, std::vector<Mat_<T> >, UMat, std::vector<UMat>, and double.
void func(InputArray _src1, InputArray _src2, OutputArray _dst)
{
    Mat src1 = _src1.getMat(), src2 = _src2.getMat();
    CV_Assert( src1.type() == src2.type() && src1.size() == src2.size() );
    Mat aux1 = Mat(src1.size(), src1.type());
    Mat aux2 = Mat(src1.size(), src1.type());
    cv::exp(src1, aux1);
    cv::exp(src2, aux2);
    _dst.create(src1.size(), src1.type());
    Mat dst = _dst.getMat();
    cv::add(aux1, aux2, dst);
}
Now you can test it with Mat, UMat, or even std::vector:
void test() {
    std::vector<float> vec1 = {1, 2, 3};
    std::vector<float> vec2 = {3, 2, 1};
    std::vector<float> dst;
    func(vec1, vec2, dst);
    // now dst is [22.8038, 14.7781, 22.8038]
}
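One caveat not covered above: calling getMat() on a UMat maps the data back to host memory, which gives up the OpenCL advantage the question asks to keep. A sketch of a variant (hypothetical name func2) that keeps UMat temporaries when both inputs are UMats, assuming dispatching on isUMat() is acceptable:
void func2(InputArray src1, InputArray src2, OutputArray dst)
{
    CV_Assert(src1.type() == src2.type() && src1.size() == src2.size());
    if (src1.isUMat() && src2.isUMat())
    {
        UMat aux1, aux2;          // temporaries stay on the OpenCL device
        cv::exp(src1, aux1);
        cv::exp(src2, aux2);
        cv::add(aux1, aux2, dst);
    }
    else
    {
        Mat aux1, aux2;
        cv::exp(src1, aux1);
        cv::exp(src2, aux2);
        cv::add(aux1, aux2, dst);
    }
}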
I used a template:
using namespace cv;
// MatType is either cv::Mat or cv::UMat.
template<typename MatType>
void foo(const MatType& in1, const MatType& in2, MatType& out)
{
    MatType aux1;
    MatType aux2;
    exp(in1, aux1);
    exp(in2, aux2);
    add(aux1, aux2, out);
    return;
}
Advantages:
It is easy to declare local Mats or UMats depending on the input type.
No Mat ⟷ UMat conversion.
Disadvantage:
All matrix arguments must be of the same type; it is not possible to mix Mats and UMats.
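For illustration, a minimal usage sketch (the variables are hypothetical):
Mat a = Mat::ones(3, 3, CV_32F), b = Mat::ones(3, 3, CV_32F), c;
foo(a, b, c);     // instantiates foo<Mat>
UMat ua = UMat::ones(3, 3, CV_32F), ub = UMat::ones(3, 3, CV_32F), uc;
foo(ua, ub, uc);  // instantiates foo<UMat>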

OpenCV Applying Affine transform to single points rather than entire image

I've got an affine transform matrix in OpenCV from the KeypointBasedMotionEstimator class.
It comes in a form like:
[1.0008478, -0.0017408683, -10.667297;
0.0011812132, 1.0009096, -3.3626099;
0, 0, 1]
I would now like to apply the transform to a vector<Point2f>, so that each point is transformed as it would be in the image.
OpenCV does not seem to allow transforming points only; the function:
void cv::warpAffine( InputArray src,
                     OutputArray dst,
                     InputArray M,
                     Size dsize,
                     int flags = INTER_LINEAR,
                     int borderMode = BORDER_CONSTANT,
                     const Scalar& borderValue = Scalar() )
only seems to take images as inputs and outputs.
Is there a way I can apply an affine transform to single points in OpenCV?
You can use
void cv::perspectiveTransform(InputArray src, OutputArray dst, InputArray m)
e.g.
cv::Mat yourAffineMatrix(3,3,CV_64FC1);
[...] // fill your transformation matrix
std::vector<cv::Point2f> yourPoints;
yourPoints.push_back(cv::Point2f(4,4));
yourPoints.push_back(cv::Point2f(0,0));
std::vector<cv::Point2f> transformedPoints;
cv::perspectiveTransform(yourPoints, transformedPoints, yourAffineMatrix);
I'm not sure about the Point data type, but the transformation matrix must have double type, e.g. CV_64FC1.
See also http://docs.opencv.org/modules/core/doc/operations_on_arrays.html#perspectivetransform
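As an aside, not part of the original answer: if you only have the 2x3 affine part, cv::transform applies it to each point directly; a sketch using the matrix from the question:
cv::Matx23f A(1.0008478f, -0.0017408683f, -10.667297f,
              0.0011812132f, 1.0009096f,  -3.3626099f);
std::vector<cv::Point2f> pts = { {4.f, 4.f}, {0.f, 0.f} };
std::vector<cv::Point2f> out;
cv::transform(pts, out, A);   // each point p becomes A * [p.x, p.y, 1]^T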
It's a bit clumsy, but you can matrix-multiply your points manually:
// the transformation matrix
Mat_<float> M(3, 3);
M << 1.0008478,    -0.0017408683, -10.667297,
     0.0011812132,  1.0009096,    -3.3626099,
     0,             0,             1;
// a point
Point2f p(4, 4);
// make a Mat for the multiplication;
// it must have the same type as the transformation matrix!
Mat_<float> pm(3, 1);
pm << p.x, p.y, 1.0;
// now, just multiply:
Mat_<float> pr = M * pm;
// retrieve the point:
Point2f pt(pr(0), pr(1));
cerr << pt << endl;
[-6.67087, 0.645753]
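A follow-up note, not in the original answer: stacking points as columns of one matrix transforms all of them with a single multiplication:
// one column per point, in homogeneous coordinates
Mat_<float> P(3, 2);
P << 4, 0,
     4, 0,
     1, 1;
Mat_<float> R = M * P;   // column j of R holds the transformed point j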

wrong parameters for function remap

I have the following OpenCV code:
InputArray fish =InputArray(ra);
InputArray base = InputArray(ra);
Mat warp(Mat input) {
if (!f) {
fish = genfish(input);
f = true
}
if (!b) {
base = bladArray;
b = true
}
Mat ret;
InputArray i = InputArray(input);
OutputArray p;
remap(i, p, fish, bladArray, 0, BORDER_CONSTANT, Scalar());
}
The idea behind this code is to apply the fisheye correction efficiently to the image (that's why the filter is buffered). The problem is that apparently I'm passing the wrong arguments to the remap function. According to the documentation (page 273), we have the following signature:
void remap( InputArray src, OutputArray dst, InputArray map1, InputArray map2, int interpolation, int borderMode = BORDER_CONSTANT, const Scalar& borderValue = Scalar() )
now
InputArray src = i
OutputArray dst = p
InputArray map1 = fish
InputArray map2 = bladArray
int interpolation = 0
int borderMode = BORDER_CONSTANT
Scalar borderValue = Scalar();
which all seems to fit the signature, yet it keeps telling me I have the wrong signature. What am I doing wrong?
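One likely cause, offered as a sketch rather than a definitive fix: InputArray and OutputArray are proxy classes meant only as function parameter types; they are not designed to be default-constructed or stored in variables. Keeping the cached maps as cv::Mat and passing Mats straight to remap() (which accepts them through InputArray/OutputArray) should compile, assuming genfish() and bladArray from the question produce valid remap maps (e.g. CV_32FC1 or CV_32FC2):
cv::Mat fish, base;   // cached maps
bool f = false, b = false;

cv::Mat warp(const cv::Mat& input) {
    if (!f) { fish = genfish(input); f = true; }
    if (!b) { base = bladArray;      b = true; }
    cv::Mat ret;
    cv::remap(input, ret, fish, base, cv::INTER_NEAREST, cv::BORDER_CONSTANT, cv::Scalar());
    return ret;       // note: the original warp() never returned a value
}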

Multi-Channel Back Projection Assertion (j < nimages)

Attempting to do histogram back-projection on a three-channel image results in the following error:
OpenCV Error: Assertion failed (j < nimages) in histPrepareImages, file ../modules/imgproc/src/histogram.cpp, line 148
The code which fails:
cv::Mat _refImage; //contains reference image of type CV_8UC3
cv::Mat output; //contains image data of type CV_8UC3
int histSize[] = {16, 16, 16};
int channels[] = {0, 1, 2};
const float hRange[] = {0.f, 256.f};
const float* ranges[] = {hRange, hRange, hRange};
int nChannels = 3;
cv::Mat hist;
cv::calcHist(&_refImage, 1, channels, cv::noArray(), hist, nChannels, histSize, ranges);
cv::calcBackProject(&output, 1, channels, hist, output, ranges); //This line causes assertion failure
Running nearly identical code on a single-channel image works. According to the documentation, multi-channel images are also supported. Why won't this code work?
The short answer is that cv::calcBackProject() does not support in-place operation, although this is not mentioned in the documentation.
Explanation
Digging into the OpenCV source yields the following snippet:
void calcBackProject( const Mat* images, int nimages, const int* channels,
                      InputArray _hist, OutputArray _backProject,
                      const float** ranges, double scale, bool uniform )
{
    //Some code...
    _backProject.create( images[0].size(), images[0].depth() );
    Mat backProject = _backProject.getMat();
    assert(backProject.type() == CV_8UC1);
    histPrepareImages( images, nimages, channels, backProject, dims, hist.size, ranges,
                       uniform, ptrs, deltas, imsize, uniranges );
    //More code...
}
The line which causes the problem is:
_backProject.create( images[0].size(), images[0].depth() );
which, if the source and destination are the same, reallocates the input image data. images[0].depth() evaluates to CV_8U, which is numerically equivalent to the type specifier CV_8UC1. Thus, the data is created as a single-channel image.
This is a problem because histPrepareImages still expects the input image to have 3 channels, and the assertion is thrown.
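To see the depth/type distinction concretely (a tiny check, not from the original answer):
cv::Mat img(10, 10, CV_8UC3);
CV_Assert(img.type()  == CV_8UC3);
CV_Assert(img.depth() == CV_8U);   // depth() drops the channel count
CV_Assert(CV_8U == CV_8UC1);       // numerically the same specifier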
Solution
Fortunately, the workaround is simple. The output parameter must be different from the input, like so:
cv::Mat result;
cv::calcBackProject(&output, 1, channels, hist, result, ranges);

convert cv::Mat to const CvMat* or CvMat*

I know only the C language, so I am confused by the syntax of the OpenCV data types, particularly cv::Mat, CvMat*, and Mat.
My question is: how can I convert a cv::Mat to a const CvMat* or CvMat*, and can anyone provide a documentation link for the difference between CvMat *mat, cv::Mat, and Mat in OpenCV 2.4?
And how can I convert my int data to float data in a CvMat?
Thank you
cv::Mat has an operator CvMat(), so simple assignment works:
cv::Mat mat = ....;
CvMat cvMat = mat;
This uses the same underlying data so you have to be careful that the cv::Mat doesn't go out of scope before the CvMat.
If you need to use the CvMat in an API that takes a CvMat*, then pass the address of the object:
functionTakingCvMatptr(&cvMat);
As for the difference between cv::Mat and Mat, they are the same. In OpenCV examples, it is often assumed (and I don't think this is a good idea) that using namespace cv is used.
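To make the lifetime caveat concrete, a small sketch (the scoping here is deliberately wrong):
CvMat dangling;
{
    cv::Mat mat = cv::Mat::ones(10, 10, CV_32FC1);
    dangling = mat;   // CvMat is only a header; it does not add a reference
}                     // mat is destroyed here and its data is freed
// reading through `dangling` now would access freed memory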
To answer especially surya's second question:
TBH, the documentation on OpenCV is not the best.
Here is the link to the newest type: cv::Mat. The newer types are more modern C++-style than C-style.
Here is another OpenCV forum answer on a similar topic, and here is an archived page.
Especially for the conversion problem (as juanchopanza mentioned):
cv::Mat mat = cv::Mat(10, 10, CV_32FC1); //CV_32FC1 equals float
//(reads 32bit floating-point 1 channel)
CvMat cvMat = mat;
or with
using namespace cv; // this should be at the beginning, where your includes are
Mat mat = Mat(10, 10, CV_32FC1);
CvMat cvMat = mat;
Note: Usually you would probably work with CvMat* - but you should think about switching to the newer types completely. Example (taken from my second link):
CvMat* A = cvCreateMat(10, 10, CV_32F); //guess this works fine with no channels too
Changing int to float:
CvMat* A = cvCreateMat(10, 10, CV_16SC1);
//Feed A with data
CvMat* B = cvCreateMat(10, 10, CV_32FC1);
for( int i = 0; i < 10; ++i )
    for( int j = 0; j < 10; ++j )
        CV_MAT_ELEM(*B, float, i, j) = (float) CV_MAT_ELEM(*A, short, i, j);
//Don't forget this unless you want to produce a memory leak.
cvReleaseMat(&A);
cvReleaseMat(&B);
The first two examples (without the pointer) are fine as they are, since the CvMat header lives on the stack and its data is still managed by the cv::Mat. cvCreateMat(...) allocates memory you have to free yourself later; another reason to use cv::Mat.
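For completeness, the old C API can also do the int-to-float conversion in one call with cvConvert (a macro around cvConvertScale); a sketch:
CvMat* A = cvCreateMat(10, 10, CV_16SC1);
CvMat* B = cvCreateMat(10, 10, CV_32FC1);
// ... feed A with data ...
cvConvert(A, B);   // same as cvConvertScale(A, B, 1, 0): converts the element type
cvReleaseMat(&A);
cvReleaseMat(&B);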