I defined and initialized a Mat variable using Mat::zeros, but when I print its shape (rows, cols, channels) I get wrong values.
My code is shown as follows:
#include "opencv2/opencv.hpp"
#include <opencv2/core/core.hpp>
#include <iostream>
using namespace std;
using namespace cv;
int main(int argc, char const *argv[])
{
int n_Channel = 3;
int mySizes[3] = {100, 200, n_Channel};
Mat M = Mat::zeros(n_Channel, mySizes, CV_64F);
cout << M.rows << "," << M.cols << "," << M.channels() << endl;
return 0;
}
The printed message is:
-1,-1,1
What's wrong with this?
I also find that if I declare a Mat using the following code:
int n_Channel = 3;
Mat M(Size(100, 200), CV_32FC(n_Channel));
cout << M.rows << "," << M.cols << "," << M.channels() << endl;
the outcome is correct:
200,100,3
I'm confused about this. Thank you all for helping me!
You are calling the overload of cv::Mat::zeros that takes a number of dimensions and a size array, so your n_Channel is being interpreted as the number of dimensions. For a Mat with more than two dimensions, rows and cols are reported as -1, which is exactly what you see; the channel count belongs in the type flag instead.
Let's have a look at the following code:
// Number of channels.
const int n_Channel = 3;
// Number of dimensions; keep this at 2 so that rows and cols stay meaningful.
const int n_Dimensions = 2;
// Create empty Mat using zeros, and output dimensions.
int mySizes[n_Dimensions] = { 200, 100 };
cv::Mat M1 = cv::Mat::zeros(n_Dimensions, mySizes, CV_64FC(n_Channel));
std::cout << "M1: " << M1.rows << "," << M1.cols << "," << M1.channels() << std::endl;
// Create empty Mat using constructor, and output dimensions.
cv::Mat M2 = cv::Mat(cv::Size(100, 200), CV_64FC(n_Channel), cv::Scalar(0, 0, 0));
std::cout << "M2: " << M2.rows << "," << M2.cols << "," << M2.channels() << std::endl;
which gives the following output:
M1: 200,100,3
M2: 200,100,3
So, basically you have to move the channel count from mySizes into the type flag passed to cv::Mat::zeros, i.e. CV_64FC(n_Channel). Also pay attention to the order of the image dimensions in mySizes, since it differs from the constructor taking cv::Size: cv::Size is width x height, whereas the size array is rows x cols (height first).
How to initialize a cv::Mat:
cv::Mat test = cv::Mat::zeros(cv::Size(100, 200), CV_64F);
As you can see, the first parameter is the Size; see:
https://docs.opencv.org/3.1.0/d3/d63/classcv_1_1Mat.html
I cannot understand why the following minimal code gives a segmentation fault and the cv::Mat values are not printed correctly:
#include <opencv2/opencv.hpp>
int main()
{
unsigned char out[1280*720*3/2] = {100};
cv::Mat dummy_query = cv::Mat(1, 1280*720*3/2*sizeof(unsigned char), CV_8UC1, (void *)out);
cv::Size s = dummy_query.size();
std::cout << s << "\r\n";
for(int i = 0; i < 1280*720*3/2; i++)
{
std::cout << i << "ss" << int(out[i]) << ":";
std::cout << dummy_query.at<int>(0,i) << " ";
}
}
You defined the cv::Mat with uchar data (CV_8UC1) but access it as int at this line:
std::cout << dummy_query.at<int>(0,i) << " ";
Each at<int> read covers sizeof(int) bytes instead of one, so the loop runs far past the end of the buffer, and your program will likely crash before the loop finishes.
e.g.
// create a 100x100 8-bit matrix
Mat M(100,100,CV_8U);
// this compiles fine; no data conversion is done.
Mat_<float>& M1 = (Mat_<float>&)M;
// the program is likely to crash at the statement below
M1(99,99) = 1.f;
Check the cv::Mat documentation in the OpenCV reference for details.
I want to use boost::geometry::union_ with the polygons in the picture, but it does not see point E (it reports that polygon ABCDE has the same area as ABCD). How can I fix it?
Here's how I'd write your exercise. I designed a similar scene (the points are probably named differently, but that's not the [sic] point):
Note: I've rounded the coordinates to the nearest integral number (and scaled x10 afterwards)
Live On Wandbox
#include <boost/geometry.hpp>
#include <boost/geometry/geometry.hpp>
#include <boost/geometry/geometries/point_xy.hpp>
#include <iostream>
#include <fstream>
namespace bg = boost::geometry;
namespace bgm = bg::model;
using Point = bgm::d2::point_xy<int>;
using Polygon = bgm::polygon<Point>;
using Multi = bgm::multi_polygon<Polygon>;
int main() {
auto prepare = [](auto name, auto& geo) {
std::string r;
if (!bg::is_valid(geo, r)) {
std::cout << "Correcting " << name << ": " << bg::wkt(geo) << " (" << r << ")\n";
bg::correct(geo);
}
std::cout << name << ": " << bg::wkt(geo) << "\n";
};
Point A{-120, 50}, B{-60, 70}, C{-70, 50}, D{-40, 00}, E{-130, 10};
Point F{-20, -20}, G{40, -10}, H{80, 60}, I{50, 80}, J{-20, 50}, K{30, 30};
Polygon ABCDE({{A, B, C, D, E, A}});
Polygon DFGHIJK({{D, F, G, H, I, J, K, D}});
prepare("ABCDE", ABCDE);
prepare("DFGHIJK", DFGHIJK);
Multi out;
bg::union_(ABCDE, DFGHIJK, out);
prepare("union", out);
{
namespace bs = bg::strategy::buffer;
const double buffer_distance = 3;
bs::distance_symmetric<double> distance(buffer_distance);
bs::join_miter join;
bs::end_flat end;
bs::point_circle circle(6);
bs::side_straight side;
Multi outline;
bg::buffer(out, outline, distance, side, join, end, circle);
std::ofstream svg("output.svg");
boost::geometry::svg_mapper<Point> mapper(svg, 400, 400);
mapper.add(ABCDE);
mapper.add(DFGHIJK);
mapper.add(outline);
mapper.map(ABCDE,
"fill-opacity:0.3;fill:rgb(153,0,0);stroke:rgb(153,0,0);"
"stroke-width:1;stroke-dasharray:1 2");
mapper.map(DFGHIJK,
"fill-opacity:0.3;fill:rgb(0,0,153);stroke:rgb(0,0,153);"
"stroke-width:1;stroke-dasharray:1 2 2 1");
mapper.map(outline,
"fill-opacity:0.1;fill:rgb(153,204,0);stroke:rgb(153,204,0);"
"stroke-width:1");
}
std::cout << "Areas: " << bg::area(ABCDE) << " + " << bg::area(DFGHIJK)
<< " = " << bg::area(out) << "\n";
}
Which prints the expected:
ABCDE: POLYGON((-120 50,-60 70,-70 50,-40 0,-130 10,-120 50))
Correcting DFGHIJK: POLYGON((-40 0,-20 -20,40 -10,80 60,50 80,-20 50,30 30,-40 0)) (Geometry has wrong orientation)
DFGHIJK: POLYGON((-40 0,30 30,-20 50,50 80,80 60,40 -10,-20 -20,-40 0))
union: MULTIPOLYGON(((-40 0,-130 10,-120 50,-60 70,-70 50,-40 0)),((-40 0,30 30,-20 50,50 80,80 60,40 -10,-20 -20,-40 0)))
Areas: 3600 + 5800 = 9400
It also generates the following SVG:
I hope the way in which I approach it, and specifically the diagnostics code is helpful to you.
I am a beginner to ITK and c++. I have the following code where I can get the height and width of an image. Instead of giving the input image in the console, I want to do it in the code itself. How do I directly give the input image to this code?
#include "itkImage.h"
#include "itkImageFileReader.h"
int main()
{
mat m("filename");
imshow("windowname", m);
}
// Verify command line arguments
if( argc < 2 )
{
std::cout << "Usage: " << std::endl;
std::cerr << argv[0] << " inputImageFile" << std::endl;
return EXIT_FAILURE;
}
typedef itk::Image<float, 2> ImageType;
typedef itk::ImageFileReader<ImageType> ReaderType;
ReaderType::Pointer reader = ReaderType::New();
reader->SetFileName( argv[1] );
reader->Update();
std::cout << reader->GetOutput()->GetLargestPossibleRegion().GetSize()[0] << " "
<< reader->GetOutput()->GetLargestPossibleRegion().GetSize()[1] << std::endl;
// An example image had W = 200 and H = 100 (it is wider than it is tall). The above outputs
// 200 100
// so W = GetSize()[0]
// and H = GetSize()[1]
// A pixel inside the region
itk::Index<2> indexInside;
indexInside[0] = 150;
indexInside[1] = 50;
std::cout << reader->GetOutput()->GetLargestPossibleRegion().IsInside(indexInside) << std::endl;
// A pixel outside the region
itk::Index<2> indexOutside;
indexOutside[0] = 50;
indexOutside[1] = 150;
std::cout << reader->GetOutput()->GetLargestPossibleRegion().IsInside(indexOutside) << std::endl;
// This means that the [0] component of the index references the left-to-right (column) index
// and the [1] component references the top-to-bottom (row) index
return EXIT_SUCCESS;
}
Change the line reader->SetFileName( argv[1] ); to reader->SetFileName( "C:/path/to/file.png" );
I assume that
mat m("filename");
imshow("windowname", m);
sneaked in from some unrelated code? Otherwise the example would not compile.
I am trying to code a program that eliminates some of the connected components and keep the rest.
However, at some point the program exits with the error message "Segmentation fault (core dumped)".
I have narrowed the error down to the statement destinationImage.at<int>(row, column) = labeledImage.at<int>(row, column); using the checkpoints you'll find in the code below.
I have tried all the solutions I found, especially this one, with no luck.
Please Help!
One more thing: the program reads the image correctly but does not show the original image as the code asks it to. Instead, it prints the messages "init done" and "OpenGL support available". Is this normal? Does imshow only take effect at the end of the program, with no errors?
/* Goal is to find all related components, eliminate secondary objects*/
#include <opencv2/core/utility.hpp>
#include "opencv2/imgproc.hpp"
#include "opencv2/imgcodecs.hpp"
#include "opencv2/highgui.hpp"
#include <iostream>
using namespace cv;
using namespace std;
//Declaring variables
Mat originalImage;
int conComponentsCount;
int primaryComponents;
//Declaring constants
const char* keys =
{
"{#image|../data/sample.jpg|image for converting to a grayscale}"
};
//Functions prototypes, used to be able to define functions AFTER the "main" function
Mat BinarizeImage (Mat &, int thresh);
int AverageCCArea(Mat & CCLabelsStats,int numOfLabels, int minCCSize);
bool ComponentIsIncludedCheck (int ccArea, int referenceCCArea);
//Program mainstream============================================
int main (int argc, const char **argv)
{
//Waiting for user to enter the required path, default path is defined in "keys" string
CommandLineParser parser(argc, argv, keys);
string inputImage = parser.get<string>(0);
//Reading original image
//NOTE: the program MUST terminate or loop back if the image was not loaded; functions below use reference to matrices and references CANNOT be null or empty.
originalImage = imread(inputImage.c_str(), IMREAD_GRAYSCALE);// or: imread(argv[1], CV_LOAD_IMAGE_GRAYSCALE)
cout << " 1) Loading image done!" << endl;//CHECKPOINT
if (originalImage.empty())
{
cout << "Nothing was loaded!";
return -1; //terminating program with error feedback
}
cout << " 2) Checking for null Image done!" << endl;//CHECKPOINT
namedWindow("Original Image", 0);
imshow("Original Image", originalImage);
cout << " 3) Showing ORIGINAL image done!" << endl;//CHECKPOINT
//Image Binarization; connectedcomponents function only accepts binary images.
int threshold=100; //Value chosen empirically.
Mat binImg = BinarizeImage(originalImage, threshold);
cout << " 4) Binarizing image done!" << endl;//CHECKPOINT
//Finding the number of connected components and generating the labeled image.
Mat labeledImage; //Image with connected components labeled.
Mat stats, centroids; //Statistics of connected image's components.
conComponentsCount = connectedComponentsWithStats(binImg, labeledImage, stats, centroids, 4, CV_16U);
cout << " 5) Connecting pixels done!" << endl;//CHECKPOINT
//Creating a new matrix to include the final image (without secondary objects)
Mat destinationImage(labeledImage.size(), CV_16U);
//Calculating the average of the labeled image components areas
int ccSizeIncluded = 1000;
int avgComponentArea = AverageCCArea(stats, conComponentsCount, ccSizeIncluded);
cout << " 6) Calculating components avg area done!" << endl;//CHECKPOINT
//Criteria for component sizes
for (int row = 0; row <= labeledImage.rows; row++)
{
cout << " 6a) Starting rows loop iteration # " << row+1 << " done!" << endl;//CHECKPOINT
for (int column = 0; column <= labeledImage.cols; column++)
{
//Criteria for component sizes
int labelValue = labeledImage.at<int>(row, column);
if (ComponentIsIncludedCheck (stats.at<int>(labelValue, CC_STAT_AREA), avgComponentArea))
{
//Setting pixel value to the "destinationImage"
destinationImage.at<int>(row, column) = labeledImage.at<int>(row, column);
cout << " 6b) Setting pixel (" << row << "," << column << ") done!" << endl;//CHECKPOINT
}
else
cout << " 6c) Pixel (" << row << "," << column << ") Skipped!" << endl;//CHECKPOINT
}
cout << " 6d) Row " << row << " done!" << endl;//CHECKPOINT
}
cout << " 7) Showing FINAL image done!" << endl;//CHECKPOINT
namedWindow("Final Image", 0);
imshow("Final Image", destinationImage);
cout << " 8) Program done!" << endl;//CHECKPOINT
waitKey (0);
}
//+++++++++++++++++++++++++++++++++++++++++++++++++++
Mat BinarizeImage (Mat & originalImg, int threshold=100) //default value of threshold of grey content.
{
// Binarization of image to be used in connectedcomponents function.
Mat bw = threshold < 128 ? (originalImg < threshold) : (originalImg > threshold);
return bw;
}
//+++++++++++++++++++++++++++++++++++++++++++++++++++
int AverageCCArea(Mat & CCLabelsStats,int numOfLabels, int minCCSize) //calculates the average area of connected components without components smaller than minCCSize pixels..... reference is used to improve performance, passing-by-reference does not require copying the matrix to this function.
{
int average;
for (int i=1; i<=numOfLabels; i++)
{
int sum = 0;
int validComponentsCount = numOfLabels - 1;
if (CCLabelsStats.at<int>(i, CC_STAT_AREA) >= minCCSize)
{
sum += CCLabelsStats.at<int>(i, CC_STAT_AREA);
}
else
{
validComponentsCount--;
}
average = sum / (validComponentsCount);
}
return average;
}
//+++++++++++++++++++++++++++++++++++++++++++++++++++
bool ComponentIsIncludedCheck (int ccArea, int referenceCCArea)
{
if (ccArea >= referenceCCArea)
{
return true; //Component should be included in the destination image
}
else
{
return false; //Component should NOT be included in the destination image
}
}
change this:
for (int row = 0; row <= labeledImage.rows; row++)
to this:
for (int row = 0; row < labeledImage.rows; row++)
and this:
for (int column = 0; column <= labeledImage.cols; column++)
to this:
for (int column = 0; column < labeledImage.cols; column++)
any good?
(remember that in C++ we start counting from 0, so if e.g. labeledImage.cols == 10, the last column is the one with the index 9)
I'm trying to build the sample program brief_match_test.cpp that comes with OpenCV, but I keep getting this error from the cv::findHomography() function when I run the program:
OpenCV Error: Assertion failed (mtype == type0 || (CV_MAT_CN(mtype) == CV_MAT_CN(type0) && ((1 << type0) & fixedDepthMask) != 0)) in create, file /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_graphics_opencv/opencv/work/OpenCV-2.4.3/modules/core/src/matrix.cpp, line 1421
libc++abi.dylib: terminate called throwing an exception
findHomography ... Abort trap: 6
I'm compiling it like this:
g++ `pkg-config --cflags opencv` `pkg-config --libs opencv` brief_match_test.cpp -o brief_match_test
I've added some stuff to the program to show the keypoints that the FAST algorithm finds, but haven't touched the section dealing with homography. I'll include my modified example just in case I did screw something up:
/*
* matching_test.cpp
*
* Created on: Oct 17, 2010
* Author: ethan
*/
#include "opencv2/core/core.hpp"
#include "opencv2/calib3d/calib3d.hpp"
#include "opencv2/features2d/features2d.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <vector>
#include <iostream>
using namespace cv;
using namespace std;
//Copy (x,y) location of descriptor matches found from KeyPoint data structures into Point2f vectors
static void matches2points(const vector<DMatch>& matches, const vector<KeyPoint>& kpts_train,
const vector<KeyPoint>& kpts_query, vector<Point2f>& pts_train, vector<Point2f>& pts_query)
{
pts_train.clear();
pts_query.clear();
pts_train.reserve(matches.size());
pts_query.reserve(matches.size());
for (size_t i = 0; i < matches.size(); i++)
{
const DMatch& match = matches[i];
pts_query.push_back(kpts_query[match.queryIdx].pt);
pts_train.push_back(kpts_train[match.trainIdx].pt);
}
}
static double match(const vector<KeyPoint>& /*kpts_train*/, const vector<KeyPoint>& /*kpts_query*/, DescriptorMatcher& matcher,
const Mat& train, const Mat& query, vector<DMatch>& matches)
{
double t = (double)getTickCount();
matcher.match(query, train, matches); //Using features2d
return ((double)getTickCount() - t) / getTickFrequency();
}
static void help()
{
cout << "This program shows how to use BRIEF descriptor to match points in features2d" << endl <<
"It takes in two images, finds keypoints and matches them displaying matches and final homography warped results" << endl <<
"Usage: " << endl <<
"image1 image2 " << endl <<
"Example: " << endl <<
"box.png box_in_scene.png " << endl;
}
const char* keys =
{
"{1| |box.png |the first image}"
"{2| |box_in_scene.png|the second image}"
};
int main(int argc, const char ** argv)
{
Mat outimg;
help();
CommandLineParser parser(argc, argv, keys);
string im1_name = parser.get<string>("1");
string im2_name = parser.get<string>("2");
Mat im1 = imread(im1_name, CV_LOAD_IMAGE_GRAYSCALE);
Mat im2 = imread(im2_name, CV_LOAD_IMAGE_GRAYSCALE);
if (im1.empty() || im2.empty())
{
cout << "could not open one of the images..." << endl;
cout << "the cmd parameters have next current value: " << endl;
parser.printParams();
return 1;
}
double t = (double)getTickCount();
FastFeatureDetector detector(15);
BriefDescriptorExtractor extractor(32); //this is really 32 x 8 matches since they are binary matches packed into bytes
vector<KeyPoint> kpts_1, kpts_2;
detector.detect(im1, kpts_1);
detector.detect(im2, kpts_2);
t = ((double)getTickCount() - t) / getTickFrequency();
cout << "found " << kpts_1.size() << " keypoints in " << im1_name << endl << "fount " << kpts_2.size()
<< " keypoints in " << im2_name << endl << "took " << t << " seconds." << endl;
drawKeypoints(im1, kpts_1, outimg, 200);
imshow("Keypoints - Image1", outimg);
drawKeypoints(im2, kpts_2, outimg, 200);
imshow("Keypoints - Image2", outimg);
Mat desc_1, desc_2;
cout << "computing descriptors..." << endl;
t = (double)getTickCount();
extractor.compute(im1, kpts_1, desc_1);
extractor.compute(im2, kpts_2, desc_2);
t = ((double)getTickCount() - t) / getTickFrequency();
cout << "done computing descriptors... took " << t << " seconds" << endl;
//Do matching using features2d
cout << "matching with BruteForceMatcher<Hamming>" << endl;
BFMatcher matcher_popcount(NORM_HAMMING);
vector<DMatch> matches_popcount;
double pop_time = match(kpts_1, kpts_2, matcher_popcount, desc_1, desc_2, matches_popcount);
cout << "done BruteForceMatcher<Hamming> matching. took " << pop_time << " seconds" << endl;
vector<Point2f> mpts_1, mpts_2;
cout << "matches2points ... ";
matches2points(matches_popcount, kpts_1, kpts_2, mpts_1, mpts_2); //Extract a list of the (x,y) location of the matches
cout << "done" << endl;
vector<char> outlier_mask;
cout << "findHomography ... ";
Mat H = findHomography(mpts_2, mpts_1, RANSAC, 1, outlier_mask);
cout << "done" << endl;
cout << "drawMatches ... ";
drawMatches(im2, kpts_2, im1, kpts_1, matches_popcount, outimg, Scalar::all(-1), Scalar::all(-1), outlier_mask);
cout << "done" << endl;
imshow("matches - popcount - outliers removed", outimg);
Mat warped;
Mat diff;
warpPerspective(im2, warped, H, im1.size());
imshow("warped", warped);
absdiff(im1,warped,diff);
imshow("diff", diff);
waitKey();
return 0;
}
I don't know for sure, so I'm really answering this just because no one else has so far and it's been 10 hours since you asked the question.
My first thought is that you don't have enough point pairs. A homography requires at least 4 pairs, otherwise a unique solution cannot be found. You may want to make sure that you only call findHomography if the number of matches is at least 4.
Alternatively, the questions here and here are about the same failed assertion (caused by calling different functions than yours, though). I'm guessing OpenCV does some form of dynamic type checking or templating such that a type mismatch error that ought to occur at compile time ends up being a run-time error in the form of a failed assertion.
All this to say, maybe you should convert mpts_1 and mpts_2 to cv::Mat before passing in to findHomography.
It's an internal OpenCV types problem: findHomography() wants a vector<unsigned char> as the last parameter, but drawMatches() requires a vector<char>.
I think this page explains a lot about brief_match_test.cpp and the ways to correct it.
You can do like this:
vector<char> outlier_mask;
Mat outlier(outlier_mask);
Mat H = findHomography(mpts_2, mpts_1, RANSAC, 1, outlier);