Difference in getting a submat in OpenCV - C++

I have tested the following two snippets, and they give different results; the second one is correct. I do not understand why, and I wonder whether this is a bug in OpenCV.
The result matrix f_sub differs between the two examples.
1)
Mat f = Mat::zeros(96,112,CV_8UC1);
randu(f,0,255);
Mat f_sub = f(cv::Rect(17,14,78,68));
//mat2File("f.mm",f,1);
//mat2File("f_sub.mm",f_sub,1);
exit(0);
2)
Mat f = Mat::zeros(96,112,CV_8UC1);
randu(f,0,255);
Mat f_sub = f(cv::Rect(17,14,78,68)).clone();
//mat2File("f.mm",f,1);
//mat2File("f_sub.mm",f_sub,1);
exit(0);
mat2File just prints a Mat to a file:
void mat2File(string filename, Mat M, int y)
{
    ofstream fout(filename.c_str());
    //fout << M.rows<<" "<<M.cols<<endl;
    uchar *M_ptr = (uchar*)M.ptr();
    for(size_t i=0; i<M.rows; i++)
    {
        fout<<endl;
        for(size_t j=0; j<M.cols; j++)
        {
            fout<< (size_t)M_ptr[i*M.cols+j]<<" ";
        }
    }
}

mat2File seems to be the culprit.
M_ptr[i*M.cols+j] is incorrect for non-continuous matrices, because the pitch (step) between matrix rows is greater than M.cols. A submatrix created with f(cv::Rect(...)) is just a view into the original image's data, so its rows are not contiguous; clone() copies the view into its own continuous buffer, which is why the second snippet prints correctly. You'd better use M.at<uchar>(y,x) to access Mat pixels.
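For reference, a minimal sketch of mat2File that is safe for non-continuous matrices (untested; it keeps the original signature and just switches to at<uchar>):

void mat2File(string filename, Mat M, int y)
{
    ofstream fout(filename.c_str());
    for (int i = 0; i < M.rows; i++)
    {
        fout << endl;
        for (int j = 0; j < M.cols; j++)
            // at() uses the row step internally, so submatrix views print correctly
            fout << (int)M.at<uchar>(i, j) << " ";
    }
}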


Change image brightness with C++

I want to change the brightness of an image by accessing pixel values only, not using OpenCV functions (e.g. convertTo).
Input: image, num, where num is a constant value added to the brightness.
Here is my code, and the result looks weird. Is there any problem?
(original and result images omitted)
cv::Mat function(cv::Mat img, int num){
    cv::Mat output;
    output = cv::Mat::zeros(img.rows, img.cols, img.type());
    for (int i = 0; i < img.rows; i++)
    {
        for (int j = 0; j < img.cols; j++)
        {
            for (int c = 0; c < img.channels(); c++)
            {
                output.at<cv::Vec3b>(i, j)[c] = img.at<cv::Vec3b>(i, j)[c] + num;
                if (output.at<cv::Vec3b>(i, j)[c] > 255){
                    output.at<cv::Vec3b>(i, j)[c] = 255;
                }
                else if (output.at<cv::Vec3b>(i, j)[c] < 0)
                {
                    output.at<cv::Vec3b>(i, j)[c] = 0;
                }
            }
        }
    }
    cv::imshow("output", output);
    cv::waitKey(0);
    return img;
}
"not using opencv function"
That's somewhat silly, since your code is already using OpenCV's data structures. In trying to do so, you have also reinvented the wheel, albeit a slightly square one...
Check for overflow before assigning to output: yes, that's the problem. The correct way to do it is to assign the sum to something larger than a uchar, and then check.
Also,
else if (output.at<cv::Vec3b>(i, j)[c] < 0)
can never be true; try to understand why (the value is stored in a uchar, which has already wrapped into [0, 255] before the test runs).
But please note that your whole code (triple loop, omg!!!) could be rewritten as a simple:
Mat output = img + Scalar::all(num);
(faster, safer, and this will also saturate correctly!)
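For completeness, a minimal sketch of the per-pixel version with the overflow handled correctly: the sum is accumulated in an int (wider than uchar) and then clamped, which is exactly what cv::saturate_cast does (this keeps the original's assumption of a 3-channel Vec3b image):

cv::Mat function(const cv::Mat& img, int num){
    cv::Mat output = cv::Mat::zeros(img.rows, img.cols, img.type());
    for (int i = 0; i < img.rows; i++)
        for (int j = 0; j < img.cols; j++)
            for (int c = 0; c < img.channels(); c++)
            {
                // the sum lives in an int, so it really can exceed 255 or drop below 0
                int v = img.at<cv::Vec3b>(i, j)[c] + num;
                output.at<cv::Vec3b>(i, j)[c] = cv::saturate_cast<uchar>(v);
            }
    return output;
}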

Input Data Type of K-Means Clustering Method in OpenCV (vector<Mat>)

I have been working on the MNIST dataset, classifying images of handwritten digits. I can read the images and calculate their histograms, and I push a Mat per histogram into a vector. But I cannot get the K-Means clustering method (kmeans()) to run, because its first argument (InputArray data) leads to an error.
OpenCV 3.0 is being used.
vector<Mat> histogram_list;
//some implementations
int clusterCount = 10;
Mat labels, centers;
int attempts = 5;
kmeans(samplingHist(histogram_list, h_bins, s_bins), clusterCount, labels,
       TermCriteria(CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 10000, 0.0001),
       attempts, KMEANS_PP_CENTERS, centers);
Is there any advice to solve this problem?
EDIT:
The error is an assertion failure ("Assertion failed ..."), and it drops me into this piece of code in mat.inl.hpp:
template<typename _Tp> inline
_Tp& Mat::at(int i0, int i1)
{
    CV_DbgAssert(dims <= 2);
    CV_DbgAssert(data);
    CV_DbgAssert((unsigned)i0 < (unsigned)size.p[0]);
    CV_DbgAssert((unsigned)(i1 * DataType<_Tp>::channels) < (unsigned)(size.p[1] * channels()));
    CV_DbgAssert(CV_ELEM_SIZE1(DataType<_Tp>::depth) == elemSize1());
    return ((_Tp*)(data + step.p[0] * i0))[i1];
}
I tried to solve the problem by sampling the vector of histograms into a single Mat, but that did not help either. Where is my mistake?
Mat samplingHist(vector<Mat> &vec, int h, int s) {
    Mat samples(vec.size(), h * s, CV_32F);
    for (int k = 0; k < vec.size(); k++)
        for (int y = 0; y < h; y++)
            for (int x = 0; x < s; x++)
                samples.at<float>(k, y * s + x) = vec[k].at<float>(y, x);
    return samples;
}
You are passing the wrong parameters. The documentation says the InputArray data must be a Mat (or at least something like a std::vector<cv::Point2f>), not a std::vector<Mat>.
Take a look at this example:
http://docs.opencv.org/3.1.0/de/d63/kmeans_8cpp-example.html
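Assuming each histogram is a continuous h_bins x s_bins CV_32F Mat (which is what calcHist produces), here is an untested sketch of flattening the vector into one sample matrix with reshape instead of an element-by-element copy; samplesFromHists is a hypothetical helper name:

Mat samplesFromHists(const vector<Mat>& hists, int h_bins, int s_bins)
{
    Mat samples((int)hists.size(), h_bins * s_bins, CV_32F);
    for (int k = 0; k < (int)hists.size(); k++)
    {
        // reshape(1, 1) views the k-th histogram as a 1 x (h_bins*s_bins) row
        // without copying; copyTo then writes it into the k-th sample row
        hists[k].reshape(1, 1).copyTo(samples.row(k));
    }
    return samples;
}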

OpenCV: Implementing Gaussian Blur

I want to implement a Gaussian blur function in OpenCV, but when I try, I get a noisy image. Here is my code:
int main( int argc, char** argv )
{
    src = imread( "fruits.jpg", 0 );
    gauss3x3 = Mat(src.cols,src.rows,src.type()); //src.clone();
    Mat kernelX = getGaussianKernel(3, 1);
    Mat kernelY = getGaussianKernel(3, 1);
    Mat kernelXY = kernelX * kernelY.t();
    filter(src,gauss3x3,kernelXY);
    namedWindow( window_name4, WINDOW_AUTOSIZE );
    imshow(window_name4,gauss3x3);
}
void filter(Mat src, Mat dst, Mat kernel) {
    cout << "filter" << endl;
    for(int i=0; i<src.rows - 0; i++) {
        for(int j=0; j<src.cols - 0; j++) {
            float p = 0;
            for(int k=0; k<kernel.rows; k++) {
                for(int l=0; l<kernel.cols; l++) {
                    if(i+k < src.rows && j+l < src.cols) {
                        p += (src.at<uchar>(i + k,j + l) * kernel.at<uchar>(k,l));
                    }
                }
            }
            if(i + kernel.rows/2 < src.rows && j + kernel.cols/2 < src.cols) {
                dst.at<uchar>(i + kernel.rows/2,j + kernel.cols/2) = p / sum(kernel)[0];
            }
        }
    }
}
I don't have any idea about the solution. Any help would be appreciated.
Change
kernel.at<uchar>(k,l)
to:
kernel.at<double>(k,l)
The point is that the default data type of the kernel returned by getGaussianKernel is CV_64F, which corresponds to double (see the documentation); reading it with at<uchar> interprets single bytes of each double, so the weights are garbage, which is why the output looks like noise.
Also, check that your input image is strictly grayscale. To support color, you would have to detect the image type at the start and branch your code; the color version would use .at<cv::Vec3b> in place of .at<uchar> (and that of course won't be the only necessary modification).
I suggest also that you change your function declaration to:
void filter(const Mat& src, Mat& dst, const Mat& kernel) {
The rest of the algorithm looks OK, except that it may be pretty slow: it would be much faster if you avoided the at method, constrained the for loops so you don't have to test for the image border at every pixel, and stored your various cv::Mat sizes in local const variables for use in the loops. Finally, sum(kernel) is constant, so you can precompute that value as well.
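A minimal sketch of those suggestions combined (untested), assuming a single-channel CV_8U source and a CV_64F kernel:

void filter(const Mat& src, Mat& dst, const Mat& kernel)
{
    const int kr = kernel.rows, kc = kernel.cols;
    const double ksum = sum(kernel)[0]; // constant, so computed once
    dst.create(src.size(), src.type());
    for (int i = 0; i + kr <= src.rows; i++) {
        for (int j = 0; j + kc <= src.cols; j++) {
            double p = 0;
            for (int k = 0; k < kr; k++) {
                // row pointers avoid the per-access overhead of at<>()
                const uchar* srow = src.ptr<uchar>(i + k);
                const double* krow = kernel.ptr<double>(k);
                for (int l = 0; l < kc; l++)
                    p += srow[j + l] * krow[l];
            }
            dst.at<uchar>(i + kr / 2, j + kc / 2) = saturate_cast<uchar>(p / ksum);
        }
    }
}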

Correctly create training matrix for SVM (OpenCV, C++)

I'm trying to create my own SVM classifier, and I'm stuck at the beginning. I have two sets of data, positive and negative images, and I want to build the training matrix to pass into the SVM. So I read this fundamental post and tried to do as it says:
const char *path_positive, *path_negative;
int numPos, numNeg;
int imageWidth=130;
int imageHeight=160;
numPos= 176;
numNeg= 735;
path_positive= "C:\\SVM\\positive\\";
path_negative= "C:\\SVM\\negative\\";
Mat classes(numPos+numNeg, 1, CV_32FC1);
Mat trainingMat(numPos+numNeg, imageWidth*imageHeight, CV_32FC1 );
vector<int> trainingLabels;
for(int file_num=0; file_num < numPos; file_num++)
{
    stringstream ss(stringstream::in | stringstream::out);
    ss << path_positive << file_num << ".jpg";
    Mat img = imread(ss.str(), CV_LOAD_IMAGE_GRAYSCALE);
    int ii = 0; // Current column in trainingMat
    for (int i = 0; i<img.rows; i++) {
        for (int j = 0; j < img.cols; j++) {
            trainingMat.at<float>(file_num,ii++) = trainingMat.at<uchar>(i,j);
        }
    }
    trainingLabels.push_back(1);
}
But when I run this code I get an "Assertion failed" error.
I know I've made some stupid mistake, because I'm very new to OpenCV.
Thanks for any help.
EDIT: The error occurs on this line:
trainingMat.at<float>(file_num,ii++) = trainingMat.at<uchar>(i,j);
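For what it's worth, a likely culprit: the right-hand side reads from trainingMat (a CV_32FC1 matrix) with at<uchar>, instead of reading the pixel from img, which trips the element-size debug assertion inside Mat::at. A minimal sketch of the corrected inner loop, assuming img loaded successfully as 8-bit grayscale:

int ii = 0; // current column in trainingMat
for (int i = 0; i < img.rows; i++) {
    for (int j = 0; j < img.cols; j++) {
        // read the 8-bit pixel from img and store it as a float in trainingMat
        trainingMat.at<float>(file_num, ii++) = (float)img.at<uchar>(i, j);
    }
}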

OpenCV: How to convert vector<Point> to Mat (CV_8U)

Hi everybody, and thanks for your attention. I have the following problem:
I have a vector<Point> which stores a number of coordinates on an image.
I want to:
1. Create a blank white Mat of type CV_8U;
2. For each Point in the vector of coordinates, mark that Point in black over the white image;
3. Print the image.
Here is the function iterating over the vector:
void Frag::updateImage(vector<Point> points){
    ...
    if(NewHeight > 0 && NewWidth > 0){
        cv::Mat NewImage = cv::Mat(NewHeight, NewWidth, CV_8U, Scalar(255));
        // Is this the correct way to initialize a blank Mat of type CV_8U???
        for (unsigned int i = 0; i < points.size(); i++) {
            uchar* PointPtr = NewImage.ptr<uchar> (points[i].x, points[i].y);
            *PointPtr = 0;
        }
        Utility::DisplayImage(NewImage);
    }
    ...
}
And here is my print function:
void Utility::DisplayImage(Mat& tgtImage) {
    namedWindow("Draw Image", (CV_WINDOW_NORMAL | CV_WINDOW_KEEPRATIO));
    imshow("Draw Image", tgtImage);
    waitKey(0);
}
My problem is the following: it looks like the values are stored in the matrix (I tried printing them), but the DisplayImage function (which works fine in all other cases) keeps showing me a blank white image.
What am I missing? A pointer-related issue? A Mat initialization issue?
--- UPDATE ---
After the first answers, I found out that the actual issue is that I am not able to set the values in the Mat. I discovered this by adding a simple loop that prints all the values in the Mat (my Mats are often very small). Here is the loop, placed right after the iteration over the vector of coordinates:
for(int j = 0; j< NewHeight; j++){
    for(int i = 0; i< NewWidth; i++){
        Logger << (int)NewImage.at<uchar> (i, j) << " ";
    }
    Logger << endl;
}
And its result is always this:
Creating image with W=2, H=7.
255 255
255 255
255 255
255 255
255 255
255 255
255 255
So the values are just not set. Any idea?
Could it be something related to the image type (CV_8U)?
I hope this will help; not tested yet.
Mat NewImage = Mat(NewHeight, NewWidth, CV_8U, Scalar(255));
for(int j = 0; j < NewHeight; j++){
    for(int i = 0; i < NewWidth; i++){
        for (int k = 0; k < points.size(); k++) {
            if(i == points[k].x && j == points[k].y)
                NewImage.at<uchar>(j,i) = 0;
        }
    }
}
imshow("NewImage", NewImage);
You could instead do:
for (int i = 0; i < points.size(); i++)
{
    cv::circle(NewImage, points.at(i), 0, cv::Scalar(0)); // The radius of 0 indicates a single pixel
}
This dispenses with the direct data access and pointer manipulation, and is much more readable.
After a thorough analysis of my code, I found that the coordinates stored in the vector were relative to a bigger Mat, say Mat OldImage.
Mat NewImage was meant to store a subset of the points of OldImage (NewImage is much smaller than OldImage), but with no conversion from one coordinate system to the other I was always writing at the wrong position.
I solved the problem by converting the points to the correct coordinate system with a simple subtraction.
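For illustration, a sketch of that subtraction; topLeft, regionX, and regionY are hypothetical names for where the sub-region starts inside OldImage:

cv::Point topLeft(regionX, regionY); // hypothetical: the sub-region's top-left corner in OldImage coordinates
for (size_t i = 0; i < points.size(); i++) {
    cv::Point p = points[i] - topLeft; // translate into NewImage coordinates
    if (p.x >= 0 && p.x < NewImage.cols && p.y >= 0 && p.y < NewImage.rows)
        NewImage.at<uchar>(p.y, p.x) = 0; // note: at(row, col), i.e. at(y, x)
}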