OpenCV/C++: Segmentation fault: 11 after matrix multiplication

I am a newbie to OpenCV and am trying to port some Matlab code to C++,
and I get a segmentation fault (signal 11) when I do the matrix multiplication.
I found that it may be caused by the trans*U in my C++ code.
The size of trans is 16032768x3 (rows x cols) and U is 3x3, so I am pretty sure they can be multiplied.
Here is the link to my photos:
Photos
Hope someone can help me solve my problem, thanks!!
Here is my C++ code:
#include <opencv2/opencv.hpp>
#include <iostream>
#include <math.h>

using namespace std;
using namespace cv;

int main(int argc, char const *argv[])
{
    // Import images (imread loads 8-bit; assigning to Mat_<double> converts)
    Mat_<double> img1, img2, img3;
    img1 = imread("S1.jpg", IMREAD_GRAYSCALE);
    img2 = imread("S2.jpg", IMREAD_GRAYSCALE);
    img3 = imread("S3.jpg", IMREAD_GRAYSCALE);

    // Push all the images into one matrix (one image per row) for SVDecomp
    Mat_<double> svd_use;
    svd_use.push_back(img1.reshape(0, 1));
    svd_use.push_back(img2.reshape(0, 1));
    svd_use.push_back(img3.reshape(0, 1));

    Mat_<double> source, trans, B, W, U, VT;
    trans = svd_use.t();
    source = svd_use * trans;
    SVDecomp(source, W, U, VT);

    // To make sure the values are the same as Matlab's
    W = (Mat_<double>(3,3) << W[0][0], 0, 0,
                              0, W[1][0], 0,
                              0, 0, W[2][0]);
    U = (Mat_<double>(3,3) << -U[0][0], -U[0][1], U[0][2],
                              -U[1][0], -U[1][1], U[1][2],
                              -U[2][0], -U[2][1], U[2][2]);

    B = trans * U; // <--- this part causes the Segmentation fault: 11
    return 0;
}
Here is my Matlab code:
%Import images
source_img1 = rgb2gray(imread('S1.JPG'));
source_img2 = rgb2gray(imread('S2.JPG'));
source_img3 = rgb2gray(imread('S3.JPG'));
%Vectorize images
img_vector1 = source_img1(:);
img_vector2 = source_img2(:);
img_vector3 = source_img3(:);
%Calculate SVD
t = double([img_vector1'; img_vector2'; img_vector3']);
[U,S,V] = svd(t*t');
B = t'*V*S^(-1/2);
Also, I have a question about matrix powers: in Matlab I can directly calculate S^(1/2);
is there any way to do the same thing in OpenCV?
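For what it's worth, since the S produced by the SVD is diagonal with non-negative entries, a fractional power such as S^(1/2) or S^(-1/2) can be taken element-wise on the diagonal. A minimal sketch, assuming the 3x3 diagonal W with strictly positive entries built in the code above:
// Sketch: S^(-1/2) for a diagonal matrix of singular values.
Mat_<double> S_inv_sqrt = Mat_<double>::zeros(3, 3);
for (int i = 0; i < 3; i++)
    S_inv_sqrt(i, i) = 1.0 / sqrt(W(i, i)); // element-wise power on the diagonal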

Related

convert octave Matrix to cv::Mat in oct file

I wrote a simple Oct file to wrap an OpenCV function. This is my code:
#include <octave/oct.h>
#include <opencv2/imgproc.hpp>

DEFUN_DLD (cornerHarris, args, , "Harris Corner Detector")
{
    // Processing arguments (five are read below, so require all five)
    if (args.length() < 5) {
        print_usage();
    }
    Matrix octInMat = args(0).matrix_value();
    int blockSize = args(1).int_value();
    int kSize = args(2).int_value();
    double k = args(3).double_value();
    int borderType = args(4).int_value();

    // Dimensions
    dim_vector dims = octInMat.dims();
    int h = dims.elem(0);
    int w = dims.elem(1);

    // OpenCV matrices
    cv::Mat cvInMat = cv::Mat::zeros(h, w, CV_8U);
    cv::Mat cvOutMat = cv::Mat::zeros(h, w, CV_32FC1);

    // Converting the Octave matrix to an OpenCV matrix
    for (int r = 0; r < h; r++)
    {
        for (int s = 0; s < w; s++)
        {
            cvInMat.at<int>(r,s) = octInMat(r,s);
        }
    }

    cv::cornerHarris( cvInMat, cvOutMat, blockSize, kSize, k, borderType );

    // Converting the OpenCV matrix back to an Octave matrix
    Matrix octOutMat = Matrix(dim_vector(h, w));
    for (int r = 0; r < h; r++)
    {
        for (int s = 0; s < w; s++)
        {
            octOutMat(r,s) = cvOutMat.at<double>(r,s);
        }
    }
    return octave_value(octOutMat);
}
But I am getting a segmentation error when the value of the w variable increases. Is there a short way to convert the matrices without looping? Or is there a way to resolve the segmentation error?
Documentation:
octave::Matrix
cv::Mat
I figured it out by commenting out my code line by line. The issue occurred on this line because of a type-casting issue:
cvInMat.at<int>(r,s) = octInMat(r,s);
I changed it as follows:
cvInMat.at<uchar>(r,s) = (uchar)octInMat(r,s);
This answer helped me to fix it.
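As for the loop-free part of the question: Octave's Matrix stores its doubles in column-major order and exposes the raw buffer through data(), so one possible sketch (an assumption on my part, not tested across Octave versions) is to wrap that buffer in a cv::Mat header, transpose, and let convertTo do the cast in one call:
// Sketch: wrap the Octave buffer directly instead of copying element by element.
// Octave stores data column-major, so the wrapped Mat is the transpose.
cv::Mat wrapped(w, h, CV_64F, const_cast<double*>(octInMat.data()));
cv::Mat transposed = wrapped.t();      // back to h x w, row-major
cv::Mat cvInMat;
transposed.convertTo(cvInMat, CV_8U);  // casts the doubles down to uchar, like the fix above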

Image Convolution with Multi Channel Kernel

I want to do three-channel image filtering with the C++ OpenCV library, using 3x3 kernels with different values for each channel. To do this, I first split the RGB image into its three channels: red, green and blue. Then I defined a different kernel matrix for each of the three channels. When I processed them with the filter2D function, the code threw an exception:
Unhandled exception at 0x00007FFAA150A388 in opencvTry.exe: Microsoft
C++ exception: cv::Exception at memory location 0x0000002D4CAF9660.
occurred
What is the reason I can't do it in the code below?
#include "opencv2/objdetect/objdetect.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
#include <typeinfo>
#include <stdlib.h>
using namespace cv;
using namespace std;
int main()
{
Mat src = imread("path\\color_palette.png", IMREAD_COLOR); //load image
int blue_array[159][318];
int green_array[159][318];
int red_array[159][318];
for (int i = 0; i < src.rows; i++) {
for (int j = 0; j < src.cols; j++) {
int a = int(src.at<Vec3b>(i, j).val[0]);
blue_array[i][j] = a;
//cout << blue_array[i][j] << ' ' ;
int b = int(src.at<Vec3b>(i, j).val[1]);
green_array[i][j] = b;
int c = int(src.at<Vec3b>(i, j).val[2]);
red_array[i][j] = c;
}
}
cv::Mat blue_array_mat(159, 318, CV_32S, blue_array);
cv::Mat green_array_mat(159, 318, CV_32S, green_array);
cv::Mat red_array_mat(159, 318, CV_32S, red_array);
float kernelForBlueData[9] = { 1,0,1, 2,0,-2, 1,0,-1};
cv::Mat kernelForBlue(3, 3, CV_32F, kernelForBlueData);
float kernelForGreenData[9] = { 1./16, 2./16, 1./16, 2./16, 4./16,2./16, 1./16, 2./16, 1./16 };
cv::Mat kernelForGreen(3, 3, CV_32F, kernelForGreenData);
float kernelForRedData[9] = { 1./9,1./9, 1./9, 1./9, 1./9,1./9, 1./9, 1./9,1./9 };
cv::Mat kernelForRed(3, 3, CV_32F, kernelForRedData);
//cv::filter2D(blue_array_mat, blue_array_mat, -1, kernelForBlue, Point(-1, -1), 5.0, BORDER_REPLICATE);
filter2D(blue_array_mat, blue_array_mat, 0, kernelForBlue);
imshow("filter", blue_array_mat);
waitKey(0);
return 0;
}
You're using a constructor for cv::Mat that expects a pointer to data (e.g. an int*), but you passed an int** into it. This is the reason for the crash, I presume.
Why not create the cv::Mat first and then write the data directly into it?
Note that OpenCV has a function that does this for you:
cv::Mat chans[3];
cv::split(src, chans); // OpenCV loads images in BGR order, so chans[0] is blue
//...
cv::filter2D(chans[0], chans[0], 0, kernelForBlue);
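For completeness, a minimal end-to-end sketch along those lines (the kernel values are from the question; filtering the 8-bit channel directly is my assumption, since filter2D's supported input depths do not include CV_32S):
#include <opencv2/opencv.hpp>
using namespace cv;
int main()
{
    Mat src = imread("color_palette.png", IMREAD_COLOR); // path from the question
    if (src.empty()) return -1;
    Mat chans[3];
    split(src, chans); // BGR order: chans[0]=blue, chans[1]=green, chans[2]=red
    float kernelForBlueData[9] = { 1,0,1, 2,0,-2, 1,0,-1 };
    Mat kernelForBlue(3, 3, CV_32F, kernelForBlueData);
    Mat filteredBlue;
    filter2D(chans[0], filteredBlue, -1, kernelForBlue); // -1 keeps the source depth
    imshow("filtered blue channel", filteredBlue);
    waitKey(0);
    return 0;
}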

OpenCV Harris Corner Detection crashes

I'm trying to use the Harris corner detection algorithm in OpenCV to find corners in an image, and I want to track them across consecutive frames using Lucas-Kanade pyramidal optical flow.
I have this C++ code, which doesn't seem to work for some reason:
#include <stdio.h>
#include "opencv2/core/core.hpp"
#include "opencv2/calib3d/calib3d.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/contrib/contrib.hpp"
#include "opencv2/opencv.hpp"

using namespace cv;
using namespace std;

int main()
{
    Mat img1, img2;
    Mat disp1, disp2;
    int thresh = 200;
    vector<Point2f> left_corners;
    vector<Point2f> right_corners;
    vector<unsigned char> status;
    vector<float> error;

    Size s;
    s.height = 400;
    s.width = 400;

    img1 = imread("D:\\img_l.jpg", 0);
    img2 = imread("D:\\img_r.jpg", 0);
    resize(img2, img2, s, 0, 0, INTER_CUBIC);
    resize(img1, img1, s, 0, 0, INTER_CUBIC);

    disp1 = Mat::zeros( img1.size(), CV_32FC1 );
    disp2 = Mat::zeros( img2.size(), CV_32FC1 );

    int blockSize = 2;
    int apertureSize = 3;
    double k = 0.04;
    cornerHarris( img1, disp1, blockSize, apertureSize, k, BORDER_DEFAULT );
    normalize( disp1, disp1, 0, 255, NORM_MINMAX, CV_32FC1, Mat() );

    for( int j = 0; j < disp1.size().height; j++ )
    {
        for( int i = 0; i < disp1.size().width; i++ )
        {
            if( (int) disp1.at<float>(j,i) > thresh )
            {
                left_corners.push_back(Point2f( j, i ));
            }
        }
    }

    right_corners.resize(left_corners.size());
    calcOpticalFlowPyrLK(img1, img2, left_corners, right_corners, status, error, Size(11,11), 5);
    printf("Vector size : %d", (int)left_corners.size());
    waitKey(0);
    return 0;
}
When I run it, I get the following error message:
Microsoft Visual Studio C Runtime Library has detected a fatal error in OpenCVTest.exe.
(OpenCVTest being the name of my project)
OpenCV Error: Assertion failed ((npoints = prevPtsMat.checkVector(2, CV_32F, true)) >= 0) in unknown function, file ..\..\OpenCV-2.3.0-win-src\OpenCV-2.3.0\modules\video\src\lkpyramid.cpp, line 71
I have been trying to debug this from yesterday, but in vain. Please help.
As we can see in the source code, this error is thrown if the previous points array is in some way faulty. Exactly what makes it bad is hard to say, since the documentation for checkVector is a bit sketchy; you can still look at the code to find out.
But my guess is that your left_corners variable has either the wrong type (not CV_32F) or the wrong shape.
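For comparison, a sketch of a setup that satisfies that assertion (goodFeaturesToTrack fills exactly the vector<Point2f> layout that checkVector(2, CV_32F) accepts; also note that Point2f takes (x, y), i.e. (column, row), whereas the loop above pushes (j, i)):
#include <opencv2/opencv.hpp>
using namespace cv;
int main()
{
    Mat img1 = imread("img_l.jpg", 0), img2 = imread("img_r.jpg", 0);
    if (img1.empty() || img2.empty()) return -1;
    std::vector<Point2f> left_corners, right_corners;
    std::vector<unsigned char> status;
    std::vector<float> error;
    // Up to 500 strong corners, already in (x, y) order
    goodFeaturesToTrack(img1, left_corners, 500, 0.01, 10);
    if (!left_corners.empty()) // an empty vector would also fail the assertion
        calcOpticalFlowPyrLK(img1, img2, left_corners, right_corners,
                             status, error, Size(11, 11), 5);
    return 0;
}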

Applying SVD to YCbCr image in OpenCV

I am trying to watermark an image into a video sequence. The process requires decomposing the frames with SVD, which I am trying to achieve using the partial code below. The SVD constructor at line 47 fails with a segmentation fault.
gdb reports the following error:
"Program received signal SIGSEGV, Segmentation fault.
0xb5d31ada in dlange_ () from /usr/lib/liblapack.so.3gf"
#include <iostream>
#include <stdio.h>
#include "cv.h"
#include "highgui.h"

const unsigned int MAX = 10000;

using namespace cv;
using namespace std;

int NO_FRAMES;

bool check_exit()
{
    return waitKey(27) > 0;
}

int main(int argc, char ** argv)
{
    Mat rgb[MAX];
    Mat ycbcr[MAX];
    Mat wm_rgb[MAX];

    namedWindow("watermark", 1);
    namedWindow("RGB", 1);
    namedWindow("YCBCR", 1);

    VideoCapture capture(argv[1]);
    Mat watermark = imread(argv[2]);

    int i = 0;
    capture >> rgb[i];
    imshow("watermark", watermark);

    while(!rgb[i].empty())
    {
        imshow("RGB", rgb[i]);
        cvtColor(rgb[i], ycbcr[i], CV_RGB2YCrCb);
        imshow("YCBCR", ycbcr[i]);
        i++;
        capture >> rgb[i];
        cout << "frame " << i << endl;
        if(check_exit())
            exit(0);
    }

    // This line creates the segmentation fault
    SVD temp(rgb[0]);
    capture.release();
    return 0;
}
Being more familiar with the C interface, I'll just describe a few things that seem out of place in your C++ code:
The SVD() function expects the input image to be a floating-point image, so you may need to convert/scale it to 32-bit from the standard 8-bit. Here's some very basic (and not very efficient) code for illustration purposes:
int N = img->width;
IplImage* W = cvCreateImage( cvSize(N, 1), IPL_DEPTH_32F, 1 );
IplImage* A = cvCreateImage( cvGetSize(img), IPL_DEPTH_32F, 1 );
cvConvertScale(img, A);  // convert the 8-bit input to 32-bit float
cvSVD(A, W, NULL, NULL, CV_SVD_MODIFY_A);
The SVD values are stored in the Nx1 matrix (an IplImage in this case) named W. The input image img is converted to 32-bit in A. The CV_SVD_MODIFY_A flag makes it faster by allowing cvSVD to modify the values in A. The other outputs were left blank (NULL), but you can supply arguments as needed. Please check the OpenCV docs for those.
Hopefully you'll be able to figure out from this working code what was wrong in your C++ code.
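Translated to the C++ interface the question uses, the same precondition looks roughly like this (a sketch reusing the ycbcr array from the question; decomposing the luma channel is my assumption):
// Sketch: cv::SVD needs a single-channel floating-point matrix, so convert
// the 8UC3 frame before decomposing it (here, the Y channel of YCbCr).
Mat channels[3], A;
split(ycbcr[0], channels);
channels[0].convertTo(A, CV_32F); // Y channel as 32-bit float
SVD temp(A);                      // temp.w, temp.u, temp.vt hold the result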

Knowing a pixel value after making an RGB-to-HSV conversion in OpenCV

I'm trying to get the H, S and V values of an image, so I convert an RGB image to HSV and then just read the desired values and print them. I'm not quite sure I'm doing this right, because when printing the value (the V of HSV) I get values of 100+, and I understood that V only goes from 0 to 100. Maybe I'm not using a correct method; here's the code:
#include "opencv/highgui.h"
#include "opencv/cv.h"
#include <cstdlib>
#include <iostream>
#include <stdio.h>
using namespace std;
int main(int argc, char** argv) {
int i=0,total=0;
IplImage* img = cvLoadImage( argv[1] );
IplImage* hsv;
CvSize size;
int key = 0, depth;
size = cvGetSize(img);
depth = img->depth;
hsv = cvCreateImage(size, depth, 3);
cvCvtColor( img, hsv, CV_BGR2HSV );
for(i=0;i<480;i++){ //asking for the values in \ form (1,1)(2,2),...(480,480)
CvScalar s;
s = cvGet2D(hsv,i,i);
printf("s=%f\n,s.val[2]); //s.val[2] equals to hs**V** right?
}
cvReleaseImage(&img);
cvReleaseImage(&val);
return 0;
}
The other answer here is correct, but here is a code snippet that I use to calculate the V channel in OpenCV. I take the value from the GIMP app, and this function gives me the OpenCV value.
//Max values: App HSV H=360 S=100 V=100 OpenCV H=180 S=255 V=255
double newHSV(double value)
{
//new_val = value * opencv_max_range / other_app_max_range
double newValue = value * 255 / 100;
return newValue;
}
To check your OpenCV HSV values in another application like GIMP, just apply the formula in the other direction:
gimp_value = opencv_value * other_app_max_range / opencv_max_range
The way you're doing it is correct; it's just that the values are scaled a little differently.
H should ideally go from 0-360, but because a byte can only hold 0-255, the H values are halved, so the range is 0-180.
V and S use the full range of 0-255 to specify value and saturation.
You can read more about it here: http://opencv.willowgarage.com/documentation/python/miscellaneous_image_transformations.html#cvtcolor
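Putting the two answers together, a small sketch (the helper and its name are mine) that prints a full OpenCV HSV pixel in GIMP-style ranges:
// Sketch: map OpenCV HSV ranges (H: 0-180, S: 0-255, V: 0-255)
// to GIMP-style ranges (H: 0-360, S: 0-100, V: 0-100).
void printGimpHsv(CvScalar s)
{
    double h = s.val[0] * 360.0 / 180.0; // H was halved to fit in a byte
    double sat = s.val[1] * 100.0 / 255.0;
    double v = s.val[2] * 100.0 / 255.0; // so a raw V above 100 is expected
    printf("H=%.1f S=%.1f V=%.1f\n", h, sat, v);
}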