Wrong parameters for function remap - C++

I have the following opencv code:
// f, b, ra, bladArray and genfish are declared elsewhere
InputArray fish = InputArray(ra);
InputArray base = InputArray(ra);

Mat warp(Mat input) {
    if (!f) {
        fish = genfish(input);
        f = true;
    }
    if (!b) {
        base = bladArray;
        b = true;
    }
    Mat ret;
    InputArray i = InputArray(input);
    OutputArray p;
    remap(i, p, fish, bladArray, 0, BORDER_CONSTANT, Scalar());
}
The idea behind this code is to apply the fisheye correction to the image efficiently (that's why the filter is buffered). The problem is that apparently I'm passing the wrong arguments to the remap function. According to the documentation (page 273), the signature is:
void remap(InputArray src, OutputArray dst, InputArray map1, InputArray map2, int interpolation, int borderMode = BORDER_CONSTANT, const Scalar& borderValue = Scalar())
now:
InputArray src = i
OutputArray dst = p
InputArray map1 = fish
InputArray map2 = bladArray
int interpolation = 0
int borderMode = BORDER_CONSTANT
const Scalar& borderValue = Scalar()
which all seem to fit the signature, yet the compiler keeps telling me I'm passing the wrong arguments. What am I doing wrong?

Is there a Mat arithmetic + operator alternative for UMat?

I'm trying to apply a homomorphic filter in my video player program.
While porting code to UMat, I found something incompatible with the existing Mat code.
In the Mat code:
cv::Mat temp;
someImage.convertTo(temp, CV_32FC1);
temp = temp + 0.01;
What does temp = temp + 0.01 mean?
And how can I use this operation with UMat?
OpenCV's operator+(const Mat& a, const Scalar& s) adds a scalar value to each element of the matrix. It's practically the same as calling void add(InputArray src1, InputArray src2, OutputArray dst, InputArray mask = noArray(), int dtype = -1).
The InputArray interface accepts Mats and UMats as well as Scalars, so you can just call:
cv::UMat temp(3, 3, CV_32FC1, cv::Scalar(0));
cv::add(temp, 0.01, temp);

Writing template functions accepting inputs of type cv::Mat or cv::UMat

What is the simplest and cleanest way to write a function accepting input arguments of type Mat or UMat?
Should I use InputArray, use templates, or is there a better alternative? I currently have functions with identical implementations written for both Mat and UMat.
The function should take full advantage of the UMat abstraction over OpenCL, run roughly as fast as if it were written just for UMats, and avoid the overhead of copying UMats to Mats.
An example of a function I might want to define for both Mat and UMat is the following (please do not propose refactoring to remove the local Mat/UMat variables; this is just an example):
using namespace cv;

void foo(const Mat& in1, const Mat& in2, Mat& out)
{
    Mat aux1;
    Mat aux2;
    exp(in1, aux1);
    exp(in2, aux2);
    add(aux1, aux2, out);
}

void foo(const UMat& in1, const UMat& in2, UMat& out)
{
    UMat aux1;
    UMat aux2;
    exp(in1, aux1);
    exp(in2, aux2);
    add(aux1, aux2, out);
}
As suggested by @Miki and @Gianni, we can choose InputArray and OutputArray as the general types covering Mat, Mat_<T>, Matx<T, m, n>, std::vector<T>, std::vector<std::vector<T> >, std::vector<Mat>, std::vector<Mat_<T> >, UMat, std::vector<UMat> or double.
void func(InputArray _src1, InputArray _src2, OutputArray _dst)
{
    Mat src1 = _src1.getMat(), src2 = _src2.getMat();
    CV_Assert(src1.type() == src2.type() && src1.size() == src2.size());
    Mat aux1 = Mat(src1.size(), src1.type());
    Mat aux2 = Mat(src1.size(), src1.type());
    cv::exp(src1, aux1);
    cv::exp(src2, aux2);
    _dst.create(src1.size(), src1.type());
    Mat dst = _dst.getMat();
    cv::add(aux1, aux2, dst);
}
Now you can test it with a Mat, a UMat, or even a std::vector:
void test() {
    std::vector<float> vec1 = {1, 2, 3};
    std::vector<float> vec2 = {3, 2, 1};
    std::vector<float> dst;
    func(vec1, vec2, dst);
    // now dst is [22.8038, 14.7781, 22.8038]
}
I used a template:
using namespace cv;

// MatType is either cv::Mat or cv::UMat.
template <typename MatType>
void foo(const MatType& in1, const MatType& in2, MatType& out)
{
    MatType aux1;
    MatType aux2;
    exp(in1, aux1);
    exp(in2, aux2);
    add(aux1, aux2, out);
}
Advantages:
It is easy to declare local Mats or UMats depending on the input type.
No Mat ⟷ UMat conversion.
Disadvantage:
All matrix arguments must be of the same type; it is not possible to mix Mats and UMats.

Why does the fixedSize error depend on the default parameter?

I was trying to find a way to give a parameter of Mat type a default value, but it turned out to be complicated. Today I found the code OutputArray _hist = Mat() and thought it could simply serve as a default parameter of Mat type. So I wrote the code below and it worked well, but there is still one thing I don't understand.
int myGetHistogram(InputArray _src, OutputArray _hist = Mat())
{
    Mat src = _src.getMat();
    _hist.create(512, 512, CV_8U);
    Mat histImage = _hist.getMat();
    ...
    rectangle(histImage, max_pt1, max_pt2, Scalar(0), -1);
    return max_pt1.x / (histImage.cols / 256);
}
With this code, the following error message shows up:
OpenCV Error: Assertion failed (!fixedSize() || ((Mat*)obj)->size.operator()() == _sz)
If I set the default parameter to OutputArray _hist = Mat(512,512,CV_8U) instead of OutputArray _hist = Mat(), the error goes away.
Why does this problem happen?
In either case you're creating an OutputArray from a temporary Mat (i.e. this constructor), so you won't be able to change the size or datatype.
Take inspiration from the OpenCV code. Use cv::noArray() to make the output parameter optional, and then cv::OutputArray::needed to determine how to initialize your cv::Mat histImage.
#include <opencv2/opencv.hpp>

int myGetHistogram(cv::InputArray _src, cv::OutputArray _hist = cv::noArray())
{
    cv::Mat src = _src.getMat();
    cv::Size const HISTOGRAM_SIZE(512, 512);
    cv::Mat histImage;
    if (_hist.needed()) {
        _hist.create(HISTOGRAM_SIZE, CV_8U);
        histImage = _hist.getMat();
    } else {
        histImage = cv::Mat(HISTOGRAM_SIZE, CV_8UC1);
    }
    // ... whatever
    return 1;
}
int main()
{
    cv::Mat a(4, 4, CV_8UC1);
    cv::Mat b;
    myGetHistogram(a);
    myGetHistogram(a, b);
    return 0;
}

OpenCV Applying Affine transform to single points rather than entire image

I've got an affine transform matrix in OpenCV from the KeypointBasedMotionEstimator class.
It comes in a form like:
[1.0008478, -0.0017408683, -10.667297;
 0.0011812132, 1.0009096, -3.3626099;
 0, 0, 1]
I would now like to apply the transform to a vector<Point2f>, so that each point is transformed as it would be in the image.
OpenCV does not seem to allow transforming points only; the function:
void cv::warpAffine(InputArray src,
                    OutputArray dst,
                    InputArray M,
                    Size dsize,
                    int flags = INTER_LINEAR,
                    int borderMode = BORDER_CONSTANT,
                    const Scalar& borderValue = Scalar())
only seems to take images as inputs and outputs.
Is there a way I can apply an affine transform to single points in OpenCV?
You can use
void cv::perspectiveTransform(InputArray src, OutputArray dst, InputArray m)
e.g.
cv::Mat yourAffineMatrix(3,3,CV_64FC1);
[...] // fill your transformation matrix
std::vector<cv::Point2f> yourPoints;
yourPoints.push_back(cv::Point2f(4,4));
yourPoints.push_back(cv::Point2f(0,0));
std::vector<cv::Point2f> transformedPoints;
cv::perspectiveTransform(yourPoints, transformedPoints, yourAffineMatrix);
I'm not sure about the Point datatype, but the transformation matrix must have double type, e.g. CV_64FC1.
See also http://docs.opencv.org/modules/core/doc/operations_on_arrays.html#perspectivetransform
It's a bit clumsy, but you can matrix-multiply your points manually:
// the transformation matrix
Mat_<float> M(3, 3);
M << 1.0008478, -0.0017408683, -10.667297,
     0.0011812132, 1.0009096, -3.3626099,
     0, 0, 1;

// a point
Point2f p(4, 4);

// make a Mat for the multiplication;
// it must have the same type as the transformation mat!
Mat_<float> pm(3, 1);
pm << p.x, p.y, 1.0;

// now just multiply:
Mat_<float> pr = M * pm;

// retrieve the point:
Point2f pt(pr(0), pr(1));
cerr << pt << endl;
[-6.67087, 0.645753]

C++ function call by value does not work

I have a problem with this code: the original image is modified by borrarFondo(), but that function is called from segmentarHoja(), where img is passed by value, yet img gets modified anyway.
void borrarFondo(Mat& img) {
    img = ~img;
    Mat background;
    medianBlur(img, background, 45);
    GaussianBlur(background, background, Size(203, 203), 101, 101);
    img = img - background;
    img = ~img;
}

void segmentarHoja(Mat img, Mat& imsheet) {
    Mat imgbw;
    borrarFondo(img); // borrarFondo is called from here, where img is a copy
    cvtColor(img, imgbw, CV_BGR2GRAY);
    threshold(imgbw, imgbw, 0, 255, CV_THRESH_BINARY | CV_THRESH_OTSU);
    Mat element = getStructuringElement(MORPH_ELLIPSE, Size(21, 21));
    erode(imgbw, imgbw, element);
    vector<vector<Point> > contoursSheet;
    findContours(imgbw, contoursSheet, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE);
    vector<Rect> boundSheet(contoursSheet.size());
    double largest_area = 0;
    for (int i = 0; i < contoursSheet.size(); i++)
    {
        double a = contourArea(contoursSheet[i], false);
        if (a > largest_area) {
            largest_area = a;
            boundSheet[i] = boundingRect(contoursSheet[i]);
            imsheet = img(boundSheet[i]).clone();
        }
    }
    borrarFondo(imsheet);
}

int main()
{
    Mat imsheet;
    Mat image = imread("c:/imagen.jpg");
    segmentarHoja(image, imsheet);
    imshow("imsheet", imsheet);
    imshow("imagen", image); // the original image, modified by borrarFondo
    waitKey(0);
}
I don't want to change the original image.
OpenCV's Mat is a counted reference (i.e. like std::shared_ptr, with different syntax) where copy construction or assignment does not copy the underlying data. Use the clone method to make a copy. Reading the documentation is always a good idea.
If you're doing something like this:
Mat a;
Mat b = a;
or like this:
void func(Mat m) {...}
or:
vector<Mat> vm;
vm.push_back(m);
all of it is a shallow copy: the Mat header is copied, but the pointers inside are copied too, so they still point at the same pixels.
So, e.g. in the first example, b and a share the same size and data members.
This might explain why passing a Mat by value still results in pixels being manipulated through the 'shallow' copy.
To avoid that, you have to make a 'deep' copy instead:
Mat c = a.clone(); // c has its own pixels now
And again: if you don't want your Mat to be manipulated, pass it as a const Mat&, but be very careful about how you use it, as illustrated below.
#include <opencv2/opencv.hpp>

void foo( cv::Mat const& image )
{
    cv::Mat result = image;
    cv::ellipse(
        result,                          // img
        cv::Point( 300, 300 ),           // center
        cv::Size( 50, 50 ),              // axes (bounding box size)
        0.0,                             // angle
        0.0,                             // startAngle
        360.0,                           // endAngle
        cv::Scalar_<int>( 0, 0, 255 ),   // color
        6                                // thickness
        );
}

auto main() -> int
{
    auto window_name = "Display";
    cv::Mat lenna = cv::imread( "lenna.png" );
    foo( lenna );
    cv::imshow( window_name, lenna );
    cv::waitKey( 0 );
}
The Mat const& lied about mutability, and Lenna's nose is correspondingly long: the big fat circle placed by the foo function above ends up on the original image, because the shallow copy result shares Lenna's pixels.