I once came across this expression in some OpenCV code:
Mat bimage = image >= sliderPos;
Note that sliderPos is an integer.
What does it mean?
Thanks in advance
ADDITION: of course the type of image is cv::Mat
It is hard to tell without knowing the type of image, but according to the OpenCV documentation, I think this line converts image into a black and white image, using sliderPos as a threshold to determine which pixels will be black.
From the OpenCV documentation about matrices:
Comparison: A cmpop B, A cmpop alpha, alpha cmpop A, where cmpop is
one of : >, >=, ==, !=, <=, <. The result of comparison is an 8-bit
single channel mask whose elements are set to 255 (if the particular
element or pair of elements satisfy the condition) or 0.
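For instance, a minimal sketch of that behaviour (the tiny image and threshold value are made up) showing the resulting 0/255 mask:

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    // Tiny made-up 8-bit single-channel image and threshold.
    cv::Mat image = (cv::Mat_<uchar>(2, 3) << 10, 120, 200,
                                              40, 130, 250);
    int sliderPos = 128;

    cv::Mat bimage = image >= sliderPos;  // element-wise comparison

    std::cout << "type == CV_8U: " << (bimage.type() == CV_8U) << std::endl;
    std::cout << bimage << std::endl;     // 0 where below, 255 where >= sliderPos
    return 0;
}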
The expression
Mat bimage = image >= sliderPos;
tests whether image is greater than or equal to sliderPos (which usually yields a bool) and assigns the result of the test to the newly created variable bimage of type Mat.
If the >= operator is overloaded for (decltype(image), int), it might not yield a bool. If this is the case, look in the documentation of the type of image for details. In any case, it yields something from which a Mat can be constructed.
Related
Let's say we have this simple Mat
cv::Mat_<cv::Vec3f> image;
image.create(100,200);
image = cv::Vec3f(1.0, 0.0, 0.0);
My question is: why does this work, properly initializing the whole Mat?
If I had to guess, the following method is called (operator=):
https://docs.opencv.org/4.x/d3/d63/classcv_1_1Mat.html#aa5c947f7e449a4d856a4f3a87fcebd50
which only accepts a Scalar.
So I searched a bit more and found that the Vec3f might be implicitly converted to a Scalar.
Also, I found this piece of code, though I am not entirely sure whether it is part of OpenCV:
static explicit operator Scalar (
Vec3b v
)
But: To the best of my knowledge, this only helps when you try to convert data types explicitly with something like:
Scalar s = (Scalar)point // where point is a cv::Vec3f
So what actually happens in the initial initialization code? Does the Vec3f have to be converted to a Scalar, as I assumed? And if so, how exactly is that accomplished behind the scenes so that it actually works?
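If it helps, here is a small sketch one could compile to probe whether an implicit Vec3f-to-Scalar conversion is what makes the assignment viable; that such a non-explicit conversion exists is my assumption here, so treat this as a test rather than an answer:

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    // If this line compiles, Vec3f converts implicitly to Scalar,
    // which would make an operator= taking a Scalar a viable candidate.
    cv::Scalar s = cv::Vec3f(1.0f, 0.0f, 0.0f);
    std::cout << s[0] << " " << s[1] << " " << s[2] << std::endl;

    // The original code: every element should end up as (1, 0, 0).
    cv::Mat_<cv::Vec3f> image;
    image.create(100, 200);
    image = cv::Vec3f(1.0f, 0.0f, 0.0f);
    std::cout << image(50, 100) << std::endl;
    return 0;
}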
I am reading a paper, and in it there is a line like
skin_map(row, col) = 1.0
where skin_map is an OpenCV cv::Mat. I am unable to understand the meaning of the above line. Can anybody help me understand it?
cv::Mat has an operator() that receives row and col. This returns a reference to that position in the mat. The remainder of the line sets that position to 1.0.
Mat::operator()
Extracts a rectangular submatrix.
C++: Mat Mat::operator()(Range rowRange, Range colRange) const
From the documentation:
OpenCV C++ n-dimensional dense array class
(emphasis mine)
The Mat class has an overloaded function-call operator that returns a reference to a cell in the "n-dimensional array", where the arguments are the positions in each separate dimension.
The variable skin_map is apparently a two-dimensional Mat instance, a.k.a. a matrix, with rows and columns.
So what the assignment does is set one specific cell in the matrix to 1.0.
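For illustration, assuming skin_map holds single-channel double values (the sizes here are made up), these are two equivalent ways of writing one cell:

#include <opencv2/opencv.hpp>

int main()
{
    int row = 3, col = 5;

    // Plain cv::Mat: use at<>() with the element type (CV_64F here).
    cv::Mat skin_map = cv::Mat::zeros(10, 10, CV_64F);
    skin_map.at<double>(row, col) = 1.0;

    // Typed cv::Mat_: operator()(row, col) returns a reference to the cell,
    // which is what makes "skin_map(row, col) = 1.0" compile.
    cv::Mat_<double> skin_map_typed(10, 10, 0.0);
    skin_map_typed(row, col) = 1.0;
    return 0;
}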
I am translating some Matlab code to C++ using OpenCV. I want to get the values of a matrix that satisfy a condition. I created a mask for this, and when I apply it to the original matrix I get a matrix of the same size as the original, but with 0 values wherever the mask is not set. My question is: how can I get only the values that are non-zero and assign them to a different matrix?
My matlab code is:
for i = 1:size(no, 1)
    mask = labels == i;
    op = orig(mask, :); % op only contains the values from orig which are in the mask, so orig and op are not the same size
    .....
end
The c++ translation that I have now is:
for (int i = 0; i < n.rows; i++)
{
    Mat mask;
    compare(labels, i, mask, CMP_EQ);
    Mat op;
    orig.copyTo(op, mask); // Here orig and op are always the same size, but values not in the mask are 0
}
So, how can I create a matrix which only contains the values that satisfy the mask?
You might try to make use of cv::SparseMat (http://docs.opencv.org/modules/core/doc/basic_structures.html#sparsemat), which only keeps non-zero values in a hash.
When you assign a regular cv::Mat to a cv::SparseMat, it automatically captures the non-zero values. From that point, you can iterate through the non-zero values and manipulate them as you'd like.
Hope I got the question right and this helps!
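A rough sketch of that idea (assuming a single-channel CV_32F matrix):

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::Mat dense = cv::Mat::zeros(4, 4, CV_32F);
    dense.at<float>(1, 2) = 3.5f;
    dense.at<float>(3, 0) = 7.0f;

    // Constructing a SparseMat from a Mat keeps only the non-zero elements.
    const cv::SparseMat sparse(dense);

    // Iterate over the non-zero values and their positions.
    for (cv::SparseMatConstIterator_<float> it = sparse.begin<float>();
         it != sparse.end<float>(); ++it)
    {
        const cv::SparseMat::Node* node = it.node();
        std::cout << "(" << node->idx[0] << ", " << node->idx[1] << ") = "
                  << *it << std::endl;
    }
    return 0;
}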
OpenCV does support matrix expressions like A > B or A <= B and so on.
This is stated in the documentation of cv::Mat.
If you simply want to store values, a Mat object is probably not the best choice, since it is designed for holding images.
In that case, use an std::vector instead of the cv::Mat, and call .push_back whenever you find a non-zero element, which will dynamically resize the vector.
If you're trying to create a new image, then you have to be specific about what kind of image you want to see, because if you don't know how many nonzero elements there are, how can you set the width and height? Also you might end up with an odd number of elements.
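For instance, a minimal sketch of collecting the masked values into a vector (assuming orig is CV_32F and mask is the CV_8U mask produced by compare):

#include <opencv2/opencv.hpp>
#include <vector>

// Collect the values of orig where mask is non-zero.
// Assumes orig is CV_32F and mask is CV_8U of the same size.
std::vector<float> maskedValues(const cv::Mat& orig, const cv::Mat& mask)
{
    std::vector<float> values;
    for (int r = 0; r < orig.rows; ++r)
        for (int c = 0; c < orig.cols; ++c)
            if (mask.at<uchar>(r, c) != 0)
                values.push_back(orig.at<float>(r, c));
    return values;
}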
I'm doing some processing, i.e. some matrix operations using Mat in OpenCV, and the following line gives an error:
matC = matA*matB;
This time, the error is not that difficult to identify, since I already know that the matrix dimensions are correct and that their number types are either CV_64FC1 or CV_32FC1.
So I can fix this error by identifying each matrix's number type (perhaps with some if-else) and replacing it with a temporary matrix (Mat) of a compatible type.
However, in terms of the implementation of OpenCV Mat multiplication, is this a good approach? Do we really have to check the operands' number type (and even number of channels, e.g. CV_64FC3, CV_64FC2) every time a matrix operation is performed?
Can this type of checking be avoided, given that matA or matB are probably returned from a function call such as solvePnP(..., matA, matB, ...) and the number types of matA and matB are not known in advance?
Thanks,
PS: I've gotten stuck with OpenCV matrix operations several times, with problems around number types, number of channels...
Edit 01:
I'm sorry for not being clear in my question, but I'm doing my best to make myself clear.
My questions are:
1) Should I, and how would I, ensure that matC = matA*matB is an error-free operation?
2) If some checking is needed before the operation, should I do it every time such a matrix operation is performed? Is there a better way to avoid replicating such checking?
Edit 02:
Here is what I currently do to perform the checking:
tmp1 = Vect32Homo(Mat(objPoints.at<Vec3f>(i)));
if (s1To.depth() != tmp1.depth())
{
    printf("Different Number Type!");
    s1To.convertTo(s1To, CV_64FC1);
    tmp1.convertTo(tmp1, CV_64FC1);
}
else
{
    printf("Same Number Type!");
}
tmp = s1To*tmp1; // error-prone operation
As you can see, this attempt will not work if the two matrices have different numbers of channels.
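One way to avoid repeating the check could be a small helper that converts both operands to a common type before multiplying; the function name and the blanket choice of CV_64F below are placeholders of mine, not an OpenCV facility:

#include <opencv2/opencv.hpp>

// Hypothetical helper: convert both operands to CV_64F before multiplying.
// operator* (gemm) needs floating-point matrices of the same type, and
// multi-channel data would typically have to be reshaped or split first.
static cv::Mat mulAsDouble(const cv::Mat& a, const cv::Mat& b)
{
    cv::Mat a64, b64;
    a.convertTo(a64, CV_64F);
    b.convertTo(b64, CV_64F);
    return a64 * b64; // sizes must still be compatible for multiplication
}

The call site then becomes matC = mulAsDouble(matA, matB); regardless of whether the inputs came back as CV_32F or CV_64F.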
How to access individual pixels in OpenCV 2.3 using C++?
For my U8C3 image I tried this:
Scalar col = I.at<Scalar>(i, j);
and
p = I.ptr<uchar>(i);
The first one throws an exception, and the second one returns some unrelated data. Also, all the examples I was able to find are for the old IplImage from the C version of OpenCV.
All I need is to get color of pixel at given coordinates.
The type you call cv::Mat::at with needs to match the type of the individual pixels. Since cv::Scalar is basically a cv::Vec<double,4>, this won't work for a U8C3 image (it would work for a F64C4 image, of course).
In your case you need a cv::Vec3b, which is a typedef for cv::Vec<uchar,3>:
Vec3b col = I.at<Vec3b>(i, j);
You can then convert this into a cv::Scalar if you really need to, but the type of the cv::Mat::at instantiation must match the type of your image, since it just casts the image data without any conversions.
Your second code snippet returns a pointer to the ith row of the image. It is not unrelated data, just a pointer to single uchar values. So in the case of a U8C3 image, every 3 consecutive elements in the data pointed to by p represent one pixel. Again, to get every pixel as a single element use
Vec3b *p = I.ptr<Vec3b>(i);
which again does nothing more than an appropriate cast of the row pointer before returning it.
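For example, a small sketch using the typed row pointer on a U8C3 image (assuming the usual BGR channel order):

#include <opencv2/opencv.hpp>

// Zero out the blue channel of row i of a CV_8UC3 image.
void clearBlueInRow(cv::Mat& I, int i)
{
    cv::Vec3b* p = I.ptr<cv::Vec3b>(i); // pointer to the first pixel of row i
    for (int j = 0; j < I.cols; ++j)
        p[j][0] = 0;                    // channel 0 is blue (assuming BGR order)
}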
EDIT: If you want to do many pixel accesses on the image, you can also use the cv::Mat_ convenience type. This is nothing more than a typed thin wrapper around the image data, so that all accesses to image pixels are appropriately typed:
Mat_<Vec3b> &U = reinterpret_cast<Mat_<Vec3b>&>(I);
You can then freely use U(i, j) and always get a 3-tuple of unsigned chars, i.e. a pixel, again without any copying, just type casts (and therefore at the same performance as I.at<Vec3b>(i, j)).
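For instance, assuming I is the U8C3 image and i, j are valid indices:

cv::Mat_<cv::Vec3b>& U = reinterpret_cast<cv::Mat_<cv::Vec3b>&>(I);
cv::Vec3b color = U(i, j);      // read pixel (i, j)
U(i, j) = cv::Vec3b(0, 0, 255); // write pure red (assuming BGR order)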