convertTo not working in OpenCV - C++

I am trying to use convertTo() in OpenCV C++, but an error pops up saying:
left of '.convertTo' must have class/struct/union
The program is below:
for (i = 0; i < height; i += 8)
{
    for (j = 0; j < width; j += 8)
    {
        Mat block = dctImage(Rect(j, i, 8, 8));
        vector<Mat> planes;
        split(block, planes);
        vector<Mat> outplanes(planes.size());
        for (k = 0; k < planes.size(); k++) {
            planes[k].convertTo(planes[k], CV_32FC1);
            dct(planes[k], outplanes[k]);
            outplanes[k].convertTo(outplanes[k], CV_8UC1);
        }
    }
}

I'm not sure if .convertTo() can handle the case of identical source and destination. You may want to try using a pair of temporary variables to get around your error message. Here is the relevant part from your example:
// ...
for (k = 0; k < planes.size(); k++) {
    Mat planes_k, outplanes_k; // <-- Added temporaries.
    planes[k].convertTo(planes_k, CV_32FC1);
    dct(planes_k, outplanes_k);
    outplanes_k.convertTo(outplanes[k], CV_8UC1);
}
// ...
UPDATE
According to the source code of .convertTo(), my suggestion isn't really required (thanks for pointing this out, @boaz001).

The 'left of' part of the error message suggests that it's null or undefined, i.e. outplanes[k] does not contain what you think it should.
I did a quick search, and it looks like this could be a compiler issue whereby it has difficulty understanding/interpreting the line ...
vector<Mat> outplanes(planes.size());
https://en.wikipedia.org/wiki/Most_vexing_parse
What is outplanes? Is it being instantiated correctly?
The 'Most vexing parse' article suggests helping the compiler with extra parentheses:
vector<Mat> outplanes((planes.size()));
It kinda depends on your code and what outplanes is actually doing, but I imagine it's not holding what you think it should be.
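For reference, the classic shape of the most vexing parse looks like this (an illustrative sketch, not code from the question):

#include <iostream>
#include <iterator>
#include <vector>
using namespace std;

int main()
{
    // Most vexing parse: this line declares a FUNCTION named data that
    // returns vector<int>; it does NOT define a vector read from cin.
    // vector<int> data(istream_iterator<int>(cin), istream_iterator<int>());

    // Extra parentheses around the first argument force the compiler to
    // parse it as a variable definition instead:
    vector<int> data((istream_iterator<int>(cin)), istream_iterator<int>());
    cout << data.size() << endl;
}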

Related

OpenCV obtain certain pixel RGB value based on mask

My title may not be clear enough, but please look carefully at the following description. Thanks in advance.
I have an RGB image and a binary mask image:
Mat img = imread("test.jpg");
Mat mask = Mat::zeros(img.rows, img.cols, CV_8U);
Give some ones to the mask; assume the number of ones is N. Now the nonzero coordinates are known, and based on these coordinates we can surely obtain the corresponding pixel RGB values of the original image. I know this can be accomplished by the following code:
Mat colors = Mat::zeros(N, 3, CV_8U);
int counter = 0;
for (int i = 0; i < mask.rows; i++)
{
    for (int j = 0; j < mask.cols; j++)
    {
        if (mask.at<uchar>(i, j) == 1)
        {
            colors.at<uchar>(counter, 0) = img.at<Vec3b>(i, j)[0];
            colors.at<uchar>(counter, 1) = img.at<Vec3b>(i, j)[1];
            colors.at<uchar>(counter, 2) = img.at<Vec3b>(i, j)[2];
            counter++;
        }
    }
}
The resulting coords are shown in a screenshot (omitted here).
However, this two-layer for loop costs too much time. I was wondering if there is a faster method to obtain colors; I hope you guys can understand what I was trying to convey.
PS: If I could use Python, this could be done in only one line:
colors = img[mask == 1]
The .at() method is the slowest way to access Mat values in C++. Fastest is to use pointers, but best practice is an iterator. See the OpenCV tutorial on scanning images.
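For example, a minimal sketch of the iterator approach (the function name and setup are mine, assuming a CV_8UC3 img and a CV_8U mask of the same size):

#include <opencv2/opencv.hpp>
#include <vector>
using namespace cv;

// Collect the colors of all mask==1 pixels using Mat iterators.
std::vector<Vec3b> maskedColors(const Mat& img, const Mat& mask)
{
    std::vector<Vec3b> colors;
    MatConstIterator_<Vec3b> itImg = img.begin<Vec3b>();
    MatConstIterator_<uchar> itMask = mask.begin<uchar>();
    for (; itMask != mask.end<uchar>(); ++itMask, ++itImg)
        if (*itMask == 1)
            colors.push_back(*itImg);
    return colors;
}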
Just a note: even though Python's syntax is nice for something like this, it still has to loop through all of the elements at the end of the day, and since it adds some overhead on top of that, it's de facto slower than C++ loops with pointers. You necessarily need to loop through all the elements regardless of your library, since you're doing a comparison with the mask for every element.
If you are flexible about using another open-source C++ library, try Armadillo. You can do all linear algebra operations with it, and you can also reduce the above code to one line (similar to your Python code snippet).
Or try the findNonZero() function to find all coordinates in the image containing non-zero values. Check this: https://stackoverflow.com/a/19244484/7514664
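A rough sketch of that approach, assuming img (CV_8UC3) and mask (CV_8U) from the question:

std::vector<cv::Point> locations;                  // coordinates of nonzero mask pixels
cv::findNonZero(mask, locations);
cv::Mat colors((int)locations.size(), 3, CV_8U);
for (int k = 0; k < (int)locations.size(); ++k) {
    const cv::Vec3b& px = img.at<cv::Vec3b>(locations[k]); // Point overload of at()
    colors.at<uchar>(k, 0) = px[0];
    colors.at<uchar>(k, 1) = px[1];
    colors.at<uchar>(k, 2) = px[2];
}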
Compile with optimization enabled, then try profiling this version and tell us if it is faster:
vector<Vec3b> colors;
if (img.isContinuous() && mask.isContinuous()) {
    auto pimg = img.ptr<Vec3b>();
    for (auto pmask = mask.datastart; pmask < mask.dataend; ++pmask, ++pimg) {
        if (*pmask)
            colors.emplace_back(*pimg);
    }
}
else {
    for (int r = 0; r < img.rows; ++r) {
        auto prowimg = img.ptr<Vec3b>(r);
        auto prowmask = mask.ptr(r); // note: the row pointer must come from mask, not img
        for (int c = 0; c < img.cols; ++c) {
            if (prowmask[c])
                colors.emplace_back(prowimg[c]);
        }
    }
}
If you know the size of colors, reserve the space for it beforehand.
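For example, since the mask is single-channel, something like this should give the exact count (my addition, not part of the original answer):

colors.reserve(cv::countNonZero(mask)); // exact number of masked pixels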

Manipulating pixels of a cv::Mat just doesn't take effect

The following code is just supposed to load an image, fill it with a constant value, and save it again.
Of course that doesn't have a purpose yet, but still it just doesn't work.
I can read the pixel values in the loop, but all changes are without effect, and the file is saved as it was loaded.
I think I followed the "efficient way" described here accurately: http://docs.opencv.org/2.4/doc/tutorials/core/how_to_scan_images/how_to_scan_images.html
int main()
{
    Mat im = imread("C:\\folder\\input.jpg");
    int channels = im.channels();
    int pixels = im.cols * channels;
    if (!im.isContinuous())
    { return 0; } // Just to show that I've thought of that. It never exits here.
    uchar* f = im.ptr<uchar>(0);
    for (int i = 0; i < pixels; i++)
    {
        f[i] = (uchar)100;
    }
    imwrite("C:\\folder\\output.jpg", im);
    return 0;
}
Normal cv functions like cvtColor() are taking effect as expected.
Are the changes through the array happening on a buffer somehow?
Huge thanks in advance!
The problem is that you are not looking at all pixels in the image. Your code only covers im.cols * im.channels() elements, which is a relatively small number compared to the size of the image (im.cols * im.rows * im.channels()). Used as the bound of the pointer loop, it only sets values for the first row of the image (if you look closely, you will notice the saved image has that row set).
Below is the corrected code:
int main()
{
    Mat im = imread("C:\\folder\\input.jpg");
    int channels = im.channels();
    int pixels = im.cols * im.rows * channels;
    if (!im.isContinuous())
    { return 0; } // Just to show that I've thought of that. It never exits here.
    uchar* f = im.ptr<uchar>(0);
    for (int i = 0; i < pixels; i++)
    {
        f[i] = (uchar)100;
    }
    imwrite("C:\\folder\\output.jpg", im);
    return 0;
}
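As an aside, a less error-prone way to compute the same loop bound is to let OpenCV count the elements (a small sketch; Mat::total() returns rows * cols for a 2D Mat):

int pixels = (int)(im.total() * im.channels()); // covers every row, column, and channel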

OpenCV not recognizing Mat size

I'm trying to print an image using OpenCV, defining a 400x400 Mat:
plot2 = cv::Mat(400,400, CV_8U, 255);
But when I try to print the points, something strange happens. The y coordinate only prints to the first 100 values. That is, if I print the point (50, 100), it does not appear at the 100/400th part of the columns, but at the end. Somehow, 400 columns have turned into 100.
For example, when running this:
for (int j = 0; j < 95; ++j){
    plot2.at<int>(20, j) = 0;
}
cv::imshow("segunda pared", plot2);
It shows this (screenshot omitted; the underlined part corresponds to the code above):
A line that goes to 95 occupies almost all of the 400 points, when it should only occupy 95/400ths of the width.
What am I doing wrong?
When you defined your cv::Mat, you clearly stated that its type is CV_8U:
plot2 = cv::Mat(400,400, CV_8U, 255);
But when you access it, you are claiming that its type is int, which is usually a signed 32-bit type, not an unsigned 8-bit one. So the solution is:
for (int j = 0; j < 95; ++j){
    plot2.at<uchar>(20, j) = 0;
}
Important note: Be aware that OpenCV uses the standard C++ types, not the fixed-size ones, so there is no need to use fixed-size types like uint16_t or similar: when compiling OpenCV and your code on another platform, both of them will change together.
BTW, one good way to iterate through your cv::Mat is:
for (int row = 0; row < my_mat.rows; ++row){
    auto row_ptr = my_mat.ptr<uchar>(row);
    for (int col = 0; col < my_mat.cols; ++col){
        // do whatever you want with row_ptr[col] (read/write)
    }
}

OpenCV function pointPolygonTest() does not behave the way I thought

Please check my code; it does not work well. No errors occur during either the build or the debug session. I want to mark all the pixels inside every contour WHITE. The contours are correct, because I have drawn them separately. But the final result is not right.
// Draw the sketches
Mat sketches(detected.size(), CV_8UC1, Scalar(0));
for (int j = ptop; j <= pbottom; ++j) {
    for (int i = pleft; i <= pright; ++i) {
        if (pointPolygonTest(contours[firstc], Point(j, i), false) >= 0) {
            sketches.at<uchar>(i, j) = 255;
        }
        if (pointPolygonTest(contours[secondc], Point(j, i), false) >= 0) {
            sketches.at<uchar>(i, j) = 255;
        }
    }
}
The variable "Mat detected" is another image used for hand detection. I have extracted two contours from it as contours[firstc] and contours[secondc]. And I also narrow down the hand part in the image to row(ptop:pbottom), and col(pleft,pright), and the two "for" loop goes correctly as well. So where exactly is the problem?.
Here is my result (screenshot omitted)! Something goes wrong with it!

Assert thrown by OpenCV checkVector() on otherwise valid vector<vector<Point3f>>

Been chasing this bug all night, so please forgive any incoherence.
I'm attempting to use the OpenCV's calibrateCamera() to extract intrinsic and extrinsic parameters from a set of fifteen pictures whose object points and world points are given. From what I can tell from debugging, I'm grabbing valid points from the input files and placing them in a vector<Point3f>, which is itself placed into another vector.
I pass the whole shebang to calibrateCamera(),
double rms = calibrateCamera(worldPoints, pixelPoints, src.size(), intrinsic, distCoeffs, rvecs, tvecs);
which throws Assertion failed (ni >= 0) in unknown function, file ...\calibration.cpp, line 3173
Pulling up this file gives us
static void collectCalibrationData( InputArrayOfArrays objectPoints,
                                    InputArrayOfArrays imagePoints1,
                                    InputArrayOfArrays imagePoints2,
                                    Mat& objPtMat, Mat& imgPtMat1, Mat* imgPtMat2,
                                    Mat& npoints )
{
    int nimages = (int)objectPoints.total();
    int i, j = 0, ni = 0, total = 0;
    CV_Assert(nimages > 0 && nimages == (int)imagePoints1.total() &&
        (!imgPtMat2 || nimages == (int)imagePoints2.total()));

    for( i = 0; i < nimages; i++ )
    {
        ni = objectPoints.getMat(i).checkVector(3, CV_32F);
        CV_Assert( ni >= 0 );
        total += ni;
    }
    ...
So far as I know, a Point3f is of CV_32F depth, and I can see good data in the nested vector just before calibrateCamera() is called.
Any ideas what might be happening here? calibrateCamera() requires a vector<vector<Point3f>>, according to http://aishack.in/tutorials/calibrating-undistorting-with-opencv-in-c-oh-yeah/ and the documentation; hopefully getMat(i) isn't failing because of that.
Could it possibly have been called on the vector<vector<Point2f>> of pixel points just after it? I have been over so many errors I am willing to believe anything.
Edit:
Unfortunately, checkVector()'s documentation was not really helpful:
int cv::Mat::checkVector(int elemChannels, int depth = -1, bool requireContinuous = true) const
Returns N if the matrix is 1-channel (N x ptdim) or ptdim-channel (1 x N) or (N x 1); a negative number otherwise.
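For what it's worth, a quick sanity check of what checkVector() returns on a wrapped vector<Point3f> (my own illustration, not from the question):

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    std::vector<cv::Point3f> pts = { {0,0,0}, {1,0,0}, {0,1,0} };
    cv::Mat m(pts);                    // wraps the vector as a 3x1 CV_32FC3 Mat
    int n = m.checkVector(3, CV_32F);  // should return 3, the number of points
    CV_Assert(n == 3);
    return 0;
}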
The problem is possibly in one of your InputArrayOfArrays arguments (precisely in worldPoints, if the assertion is thrown from the line pasted in your question). Mat objects should work just fine here.
I solved the same assertion error in my code by making all three InputArrayOfArrays (or vector<vector<Point3f>> and vector<vector<Point2f>> in my case) same-length vectors with fully populated entries. So my problem was in my architecture: my objectPoints vector contained empty entries (even though the existing data was valid), and calibrate.cpp requires that no empty entries are present in any of the three InputArrayOfArrays. Btw, I am using greyscale images for calibration, so single-channel data.
In the calib3d source, the most probable reason for throwing the error (given that you have checked that the data types match) is a null value. You might try double-checking your valid input data:
1) count the # of valid calibration images from your chosen structure:
validCalibImages = (int)goodCalibrationImages.size();
2) define worldPoints as:
vector<vector<Point3f> > worldPoints;
3) IMPORTANT: resize it to accommodate the data for each calibration entry:
worldPoints.resize(validCalibImages);
4) populate it with data, e.g.:
for(int k = 0; k < (int)goodCalibrationImages.size(); k++){
    for(int i = 0; i < chessboardSize.height; i++){
        for(int j = 0; j < chessboardSize.width; j++){
            // push into worldPoints, as defined in step 2
            worldPoints[k].push_back(Point3f(i*squareSize, j*squareSize, 0));
        }
    }
}
Hope it helps!
I agree with FSaccilotto: call checkVector and make sure you are passing a vector of size n of Mat 1x1:{3 channel}, and not a vector of Mat 1 x n:{3 channel}, or worse, Mat 1 x n:{2 channel}, which is what MatOfPoint spits out. That usually fixes 90% of assert-failed issues. Explicitly declare the Mat yourself.
The object pattern is somewhat strange in that the x, y, z coords are in the channels, not in the Mat dimensions.