Image Convolution with Multi-Channel Kernel - C++

I want to do three-channel image filtering with the C++ OpenCV library, using a different 3x3 kernel for each channel. To do this, I first split the RGB image into its three channels: red, green and blue. Then I defined a different kernel matrix for each of the three channels. When I process them with the filter2D function, the code throws an exception:
Unhandled exception at 0x00007FFAA150A388 in opencvTry.exe: Microsoft C++ exception: cv::Exception at memory location 0x0000002D4CAF9660. occurred
What is the reason I can't do this with the code below?
#include "opencv2/objdetect/objdetect.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
#include <typeinfo>
#include <stdlib.h>
using namespace cv;
using namespace std;
int main()
{
Mat src = imread("path\\color_palette.png", IMREAD_COLOR); //load image
int blue_array[159][318];
int green_array[159][318];
int red_array[159][318];
for (int i = 0; i < src.rows; i++) {
for (int j = 0; j < src.cols; j++) {
int a = int(src.at<Vec3b>(i, j).val[0]);
blue_array[i][j] = a;
//cout << blue_array[i][j] << ' ' ;
int b = int(src.at<Vec3b>(i, j).val[1]);
green_array[i][j] = b;
int c = int(src.at<Vec3b>(i, j).val[2]);
red_array[i][j] = c;
}
}
cv::Mat blue_array_mat(159, 318, CV_32S, blue_array);
cv::Mat green_array_mat(159, 318, CV_32S, green_array);
cv::Mat red_array_mat(159, 318, CV_32S, red_array);
float kernelForBlueData[9] = { 1,0,1, 2,0,-2, 1,0,-1};
cv::Mat kernelForBlue(3, 3, CV_32F, kernelForBlueData);
float kernelForGreenData[9] = { 1./16, 2./16, 1./16, 2./16, 4./16,2./16, 1./16, 2./16, 1./16 };
cv::Mat kernelForGreen(3, 3, CV_32F, kernelForGreenData);
float kernelForRedData[9] = { 1./9,1./9, 1./9, 1./9, 1./9,1./9, 1./9, 1./9,1./9 };
cv::Mat kernelForRed(3, 3, CV_32F, kernelForRedData);
//cv::filter2D(blue_array_mat, blue_array_mat, -1, kernelForBlue, Point(-1, -1), 5.0, BORDER_REPLICATE);
filter2D(blue_array_mat, blue_array_mat, 0, kernelForBlue);
imshow("filter", blue_array_mat);
waitKey(0);
return 0;
}

You're using a cv::Mat constructor that expects a pointer to contiguous data (e.g. an int*), but you are passing it a two-dimensional array. This is the reason for the crash, I presume.
Why not create the cv::Mat first and then write the data directly into it?
Note that OpenCV has a function that does the splitting for you:
cv::Mat chans[3];
cv::split(src, chans);
//...
cv::filter2D(chans[0], chans[0], 0, kernelForBlue); // channel 0 is blue in OpenCV's BGR order
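To complete the example (this part is my addition, not from the original answer): filter the remaining two planes the same way and merge everything back into a single image for display. cv::merge and cv::imshow are standard OpenCV calls; the kernel names follow the question's code.
cv::filter2D(chans[1], chans[1], 0, kernelForGreen); // green plane
cv::filter2D(chans[2], chans[2], 0, kernelForRed);   // red plane
cv::Mat filtered;
cv::merge(chans, 3, filtered); // recombine the three filtered BGR planes
cv::imshow("filter", filtered);
cv::waitKey(0);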

Related

OpenCV/C++ - Segmentation fault: 11 after Mat multiplication

I am a newbie to OpenCV and am trying to port code from Matlab to C++,
and I get a Segmentation fault: 11 when I do the matrix multiplication.
I found that it may be caused by the trans*U in my C++ code.
The size of trans is 16032768x3 (rows x cols) and U is 3x3, so I am pretty sure they can be multiplied.
Here is the link to my photos:
Photos
Hope someone can help me solve my problem, thanks!
Here is my C++ code:
#include <opencv2/opencv.hpp>
#include <iostream>
#include <math.h>

using namespace std;
using namespace cv;

int main(int argc, char const *argv[])
{
    // import images
    Mat_<double> img1, img2, img3;
    img1 = imread("S1.jpg", IMREAD_GRAYSCALE);
    img2 = imread("S2.jpg", IMREAD_GRAYSCALE);
    img3 = imread("S3.jpg", IMREAD_GRAYSCALE);

    // push all the images into one matrix for SVDecomp
    Mat_<double> svd_use;
    svd_use.push_back(img1.reshape(0, 1));
    svd_use.push_back(img2.reshape(0, 1));
    svd_use.push_back(img3.reshape(0, 1));

    Mat_<double> source, trans, B, W, U, VT;
    trans = svd_use.t();
    source = svd_use * trans;
    SVDecomp(source, W, U, VT);

    // to make sure the values are the same as in Matlab
    W = (Mat_<double>(3,3) << W[0][0], 0, 0, 0, W[1][0], 0, 0, 0, W[2][0]);
    U = (Mat_<double>(3,3) << -U[0][0], -U[0][1], U[0][2], -U[1][0], -U[1][1], U[1][2], -U[2][0], -U[2][1], U[2][2]);
    B = trans * U; // <--- this part causes the Segmentation fault: 11
    return 0;
}
Here is my Matlab code:
%Import images
source_img1 = rgb2gray(imread('S1.JPG'));
source_img2 = rgb2gray(imread('S2.JPG'));
source_img3 = rgb2gray(imread('S3.JPG'));
%Vectorize images
img_vector1 = source_img1(:);
img_vector2 = source_img2(:);
img_vector3 = source_img3(:);
%Calculate SVD
t = double([img_vector1'; img_vector2'; img_vector3']);
[U,S,V] = svd(t*t');
B = t'*V*S^(-1/2);
Also, I have a question about matrix powers: in Matlab I can directly calculate S^(1/2);
is there any way to do the same thing in OpenCV?
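(Not from the thread, but on the last question: since S here is diagonal, a fractional matrix power reduces to an element-wise power of the diagonal entries. A minimal sketch, assuming the 3x3 diagonal W built in the C++ code above:)
// Sketch: for a diagonal matrix, S^(1/2) is just the element-wise square root.
Mat_<double> S_sqrt;
cv::sqrt(W, S_sqrt); // off-diagonal zeros stay zero
// For S^(-1/2), invert only the diagonal entries to avoid dividing by zero:
Mat_<double> S_inv_sqrt = Mat_<double>::zeros(3, 3);
for (int d = 0; d < 3; d++)
    S_inv_sqrt(d, d) = 1.0 / S_sqrt(d, d);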

Visual Studio 2015 Community - Function definition not found, but the code compiles

I have a project in Visual Studio 2015 Community. It compiles without any error, but I get a green squiggly line under compute_edge_map_via_lab and compute_local_minima which says Function definition for "compute_edge_map_via_lab" not found. If I right-click on the line that calls compute_edge_map_via_lab and choose "Go to definition", it even takes me to the definition in the .cpp file, implying that Visual Studio knows where the function is defined. So I don't understand this green error. Can anyone help me with this?
I have pasted the code for compute_edge_map_via_lab and the image showing the error.
#include <boost/heap/fibonacci_heap.hpp>
#include <boost/graph/adjacency_list.hpp>
#include <boost/graph/iteration_macros.hpp>
#include <opencv2/opencv.hpp>
#include <iostream>
#include <unordered_set>
#include "image-processing.h"
int main() {
    cv::Mat image = cv::imread("0001.jpg", CV_LOAD_IMAGE_COLOR);

    // compute edge map
    cv::Mat magnitude;
    compute_edge_map_via_lab(image, magnitude);

    // compute local minima
    cv::Mat markers;
    compute_local_minima(magnitude, markers);
}
image-processing.h
#pragma once
#include <opencv2/opencv.hpp>
void compute_edge_map_via_lab(cv::Mat image, cv::Mat edge_map);
void compute_local_minima(cv::Mat magnitude, cv::Mat markers);
image-processing.cpp
#include "image-processing.h"
void compute_edge_map_via_lab(cv::Mat image, cv::Mat edge_map) {
    int rows = image.rows;
    int cols = image.cols;

    // convert bgr to lab
    cv::Mat image_lab;
    cv::cvtColor(image, image_lab, CV_BGR2Lab);

    // split lab
    std::vector<cv::Mat> image_lab_split(3);
    cv::split(image_lab, image_lab_split);

    // run sobel x and y on lab sets
    std::vector<cv::Mat> image_lab_split_dx(3), image_lab_split_dy(3);
    for (int i = 0; i < 3; i++)
    {
        cv::Sobel(image_lab_split[i], image_lab_split_dx[i], CV_32FC1, 1, 0, 3);
        cv::Sobel(image_lab_split[i], image_lab_split_dy[i], CV_32FC1, 0, 1, 3);
    }

    //-----------------------------------------------------------------------------
    // compute magnitude = term_a + term_b
    //                   = sqrt(Lx^2 + Ly^2) + sqrt(2(ax^2 + ay^2 + bx^2 + by^2))
    //-----------------------------------------------------------------------------
    // compute sqrt(Lx^2 + Ly^2)
    cv::Mat Lx_squared = cv::Mat(cv::Size(cols, rows), CV_32FC1),
            Ly_squared = cv::Mat(cv::Size(cols, rows), CV_32FC1);
    cv::pow(image_lab_split_dx[0], 2, Lx_squared);
    cv::pow(image_lab_split_dy[0], 2, Ly_squared);

    // compute term_a
    cv::Mat term_a = cv::Mat(cv::Size(cols, rows), CV_32FC1);
    term_a = Lx_squared + Ly_squared;
    cv::sqrt(term_a, term_a);

    // compute sqrt(2(ax^2 + ay^2 + bx^2 + by^2))
    cv::Mat ax_squared = cv::Mat(cv::Size(cols, rows), CV_32FC1),
            ay_squared = cv::Mat(cv::Size(cols, rows), CV_32FC1),
            bx_squared = cv::Mat(cv::Size(cols, rows), CV_32FC1),
            by_squared = cv::Mat(cv::Size(cols, rows), CV_32FC1);
    cv::pow(image_lab_split_dx[1], 2, ax_squared);
    cv::pow(image_lab_split_dy[1], 2, ay_squared);
    cv::pow(image_lab_split_dx[2], 2, bx_squared);
    cv::pow(image_lab_split_dy[2], 2, by_squared);

    // compute term_b
    cv::Mat term_b = 2 * (ax_squared + ay_squared + bx_squared + by_squared);
    cv::sqrt(term_b, term_b);

    // compute magnitude
    edge_map = term_a + term_b;
}

void compute_local_minima(cv::Mat magnitude, cv::Mat markers) {
}
As far as the C++ standard is concerned, it doesn't matter: you can declare function prototypes without definitions, so long as the functions are never called.
In the old days, before C++11, this was even exploited: e.g. declaring a constructor prototype without a definition to suppress unwanted construction.
Such cases are difficult for IntelliSense to spot, and perhaps it's a good thing that it does highlight them. (By the way, IntelliSense uses a different lexical analyser from the actual compiler!)
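For illustration, a minimal sketch of that pre-C++11 idiom (my example, not from the original post):
class NonCopyable {
public:
    NonCopyable() {}
private:
    // Declared but intentionally never defined: any attempt to copy
    // fails at link time (since C++11 you would write "= delete" instead).
    NonCopyable(const NonCopyable&);
    NonCopyable& operator=(const NonCopyable&);
};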

OpenCV Harris Corner Detection crashes

I'm trying to use the Harris corner detection algorithm of OpenCV to find corners in an image, and then track them across consecutive frames using Lucas-Kanade pyramidal optical flow.
I have this C++ code, which doesn't seem to work for some reason:
#include <stdio.h>
#include "opencv2/core/core.hpp"
#include "opencv2/calib3d/calib3d.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/contrib/contrib.hpp"
#include "opencv2/opencv.hpp"
using namespace cv;
using namespace std;
void main()
{
    Mat img1, img2;
    Mat disp1, disp2;
    int thresh = 200;
    vector<Point2f> left_corners;
    vector<Point2f> right_corners;
    vector<unsigned char> status;
    vector<float> error;
    Size s;
    s.height = 400;
    s.width = 400;

    img1 = imread("D:\\img_l.jpg", 0);
    img2 = imread("D:\\img_r.jpg", 0);
    resize(img2, img2, s, 0, 0, INTER_CUBIC);
    resize(img1, img1, s, 0, 0, INTER_CUBIC);

    disp1 = Mat::zeros(img1.size(), CV_32FC1);
    disp2 = Mat::zeros(img2.size(), CV_32FC1);

    int blockSize = 2;
    int apertureSize = 3;
    double k = 0.04;
    cornerHarris(img1, disp1, blockSize, apertureSize, k, BORDER_DEFAULT);
    normalize(disp1, disp1, 0, 255, NORM_MINMAX, CV_32FC1, Mat());

    for (int j = 0; j < disp1.size().height; j++)
    {
        for (int i = 0; i < disp1.size().width; i++)
        {
            if ((int)disp1.at<float>(j, i) > thresh)
            {
                left_corners.push_back(Point2f(j, i));
            }
        }
    }

    right_corners.resize(left_corners.size());
    calcOpticalFlowPyrLK(img1, img2, left_corners, right_corners, status, error, Size(11, 11), 5);
    printf("Vector size : %d", left_corners.size());
    waitKey(0);
}
When I run it, I get the following error message:
Microsoft Visual Studio C Runtime Library has detected a fatal error in OpenCVTest.exe.
(OpenCVTest being the name of my project)
OpenCV Error: Assertion failed ((npoints = prevPtsMat.checkVector(2, CV_32F, true)) >= 0) in unknown function, file ..\..\OpenCV-2.3.0-win-src\OpenCV-2.3.0\modules\video\src\lkpyramid.cpp, line 71
I have been trying to debug this since yesterday, but in vain. Please help.
As we can see in the source code, this error is thrown if the previous-points array is in some way faulty. Exactly what makes it bad is hard to say, since the documentation for checkVector is a bit sketchy, but you can still look at the code to find out.
My guess is that your left_corners variable has either the wrong type (not CV_32F) or the wrong shape.
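Building on that guess, a defensive sketch (my addition, not from the answer): at<float>(j, i) reads row j, column i, so the point should be built as (x = i, y = j), and it is worth checking that the list is non-empty before the optical-flow call:
// Inside the double loop: Point2f stores (x, y), i.e. (column, row).
if ((int)disp1.at<float>(j, i) > thresh)
    left_corners.push_back(Point2f((float)i, (float)j));

// Guard the call: an empty or wrongly shaped point array is the kind of
// input the checkVector assertion is there to reject.
if (!left_corners.empty()) {
    right_corners.resize(left_corners.size());
    calcOpticalFlowPyrLK(img1, img2, left_corners, right_corners,
                         status, error, Size(11, 11), 5);
}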

OpenCV - C++ code runs in Eclipse but not in the terminal?

I am trying to make the following code by Mohammad Reza Mostajabi (http://alum.sharif.ir/~mostajabi/Tutorial.html) run under Ubuntu 12.04 with OpenCV 2.4.6.1. I made some minor changes to the included libraries and added "cv::initModule_nonfree()" right at the start of main.
#include "cv.h"
#include "highgui.h"
#include "ml.h"
#include <stdio.h>
#include <iostream>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/nonfree/nonfree.hpp>
#include <vector>
using namespace cv;
using namespace std;
using std::cout;
using std::cerr;
using std::endl;
using std::vector;
char ch[30];
//--------Using SURF as feature extractor and FlannBased for assigning a new point to the nearest one in the dictionary
Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("FlannBased");
Ptr<DescriptorExtractor> extractor = new SurfDescriptorExtractor();
SurfFeatureDetector detector(500);
//---dictionary size=number of cluster's centroids
int dictionarySize = 1500;
TermCriteria tc(CV_TERMCRIT_ITER, 10, 0.001);
int retries = 1;
int flags = KMEANS_PP_CENTERS;
BOWKMeansTrainer bowTrainer(dictionarySize, tc, retries, flags);
BOWImgDescriptorExtractor bowDE(extractor, matcher);
void collectclasscentroids() {
    IplImage *img;
    int i, j;
    for (j = 1; j <= 4; j++)
        for (i = 1; i <= 60; i++) {
            sprintf(ch, "%s%d%s%d%s", "train/", j, " (", i, ").jpg");
            const char* imageName = ch;
            img = cvLoadImage(imageName, 0);
            vector<KeyPoint> keypoint;
            detector.detect(img, keypoint);
            Mat features;
            extractor->compute(img, keypoint, features);
            bowTrainer.add(features);
        }
    return;
}

int main(int argc, char* argv[])
{
    cv::initModule_nonfree();
    int i, j;
    IplImage *img2;

    cout << "Vector quantization..." << endl;
    collectclasscentroids();
    vector<Mat> descriptors = bowTrainer.getDescriptors();
    int count = 0;
    for (vector<Mat>::iterator iter = descriptors.begin(); iter != descriptors.end(); iter++)
    {
        count += iter->rows;
    }
    cout << "Clustering " << count << " features" << endl;

    // choosing cluster centroids as the dictionary's words
    Mat dictionary = bowTrainer.cluster();
    bowDE.setVocabulary(dictionary);

    cout << "extracting histograms in the form of BOW for each image" << endl;
    Mat labels(0, 1, CV_32FC1);
    Mat trainingData(0, dictionarySize, CV_32FC1);
    int k = 0;
    vector<KeyPoint> keypoint1;
    Mat bowDescriptor1;

    // extracting a histogram in the form of BOW for each image
    for (j = 1; j <= 4; j++)
        for (i = 1; i <= 60; i++) {
            sprintf(ch, "%s%d%s%d%s", "train/", j, " (", i, ").jpg");
            const char* imageName = ch;
            img2 = cvLoadImage(imageName, 0);
            detector.detect(img2, keypoint1);
            bowDE.compute(img2, keypoint1, bowDescriptor1);
            trainingData.push_back(bowDescriptor1);
            labels.push_back((float)j);
        }

    // setting up SVM parameters
    CvSVMParams params;
    params.kernel_type = CvSVM::RBF;
    params.svm_type = CvSVM::C_SVC;
    params.gamma = 0.50625000000000009;
    params.C = 312.50000000000000;
    params.term_crit = cvTermCriteria(CV_TERMCRIT_ITER, 100, 0.000001);
    CvSVM svm;

    printf("%s\n", "Training SVM classifier");
    bool res = svm.train(trainingData, labels, cv::Mat(), cv::Mat(), params);

    cout << "Processing evaluation data..." << endl;
    Mat groundTruth(0, 1, CV_32FC1);
    Mat evalData(0, dictionarySize, CV_32FC1);
    k = 0;
    vector<KeyPoint> keypoint2;
    Mat bowDescriptor2;
    Mat results(0, 1, CV_32FC1);
    for (j = 1; j <= 4; j++)
        for (i = 1; i <= 60; i++) {
            sprintf(ch, "%s%d%s%d%s", "eval/", j, " (", i, ").jpg");
            const char* imageName = ch;
            img2 = cvLoadImage(imageName, 0);
            detector.detect(img2, keypoint2);
            bowDE.compute(img2, keypoint2, bowDescriptor2);
            evalData.push_back(bowDescriptor2);
            groundTruth.push_back((float)j);
            float response = svm.predict(bowDescriptor2);
            results.push_back(response);
        }

    // calculate the number of unmatched classes
    double errorRate = (double)countNonZero(groundTruth - results) / evalData.rows;
    printf("%s%f", "Error rate is ", errorRate);
    return 0;
}
After doing this I can compile the code without problems. I can also run it within Eclipse, but when I try to run it from the terminal I get the following error message:
" OpenCV Error: Assertion failed (!_descriptors.empty()) in add, file /home/mark/Downloads/FP/opencv-2.4.6.1/modules/features2d/src/bagofwords.cpp, line 57
terminate called after throwing an instance of 'cv::Exception'
what(): /home/mark/Downloads/FP/opencv-2.4.6.1/modules/features2d/src/bagofwords.cpp:57: error: (-215) !_descriptors.empty() in function add "
I've been trying to solve the problem for a few days now, but I just cannot get rid of this error. I also tried to do it with CodeBlocks, which gives me the same error. I would appreciate some help very much!
Thanks!
It's possible that your program fails to load the input images (when launched from the terminal) because it can't find them. Make sure that your input images are copied to the directory from which you run the application. Eclipse may use a different working directory, and hence it sees the images when the program is started from Eclipse.
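A quick way to confirm this (my sketch; cvLoadImage returns NULL when a file cannot be opened): check each load inside collectclasscentroids and report the path that failed:
img = cvLoadImage(imageName, 0);
if (!img) {
    cerr << "Failed to load " << imageName
         << " - check the working directory the program was started from" << endl;
    continue; // skip this image instead of passing NULL to the detector
}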

Exception thrown for cvCalcOpticalFlowHS() - OpenCV

Hey, I'm trying to sort out OpenCV's optical flow functions, but for some reason I'm getting an exception in Visual Studio:
Unhandled exception at 0x772615de in Optical_flow.exe: Microsoft C++ exception: cv::Exception at memory location 0x0036f334..
With breakpoints I found out that the error lies within the cvCalcOpticalFlowHS function.
I'm using OpenCV 2.1.
#include <cv.h>
#include <highgui.h>
using namespace cv;
int init() {
    return 0;
}

int main(int argc, char **args) {
    CvCapture* capture = cvCaptureFromFile("Video/Wildlife.wmv");
    double fps = cvGetCaptureProperty(capture, CV_CAP_PROP_FPS);

    CvSize size;
    size.width = (int)cvGetCaptureProperty(capture, CV_CAP_PROP_FRAME_WIDTH);
    size.height = (int)cvGetCaptureProperty(capture, CV_CAP_PROP_FRAME_HEIGHT);

    CvVideoWriter* writer = cvCreateVideoWriter("result.avi", 0, fps, size, 1);
    IplImage* curFrame = cvQueryFrame(capture);
    Mat u = Mat(size, CV_32FC2);
    Mat v = Mat(size, CV_32FC2);

    CvTermCriteria IterCriteria;
    IterCriteria.type = CV_TERMCRIT_ITER | CV_TERMCRIT_EPS;
    IterCriteria.max_iter = 500;
    IterCriteria.epsilon = 0.01;

    while (1) {
        IplImage* nextFrame = cvQueryFrame(capture);
        if (!nextFrame) break;

        u = Mat::zeros(size, CV_32FC2);
        v = Mat::zeros(size, CV_32FC2);

        /* Do optical flow computation */
        cvCalcOpticalFlowHS(&curFrame, &nextFrame, 0, &u, &v, 0.01, IterCriteria);
        cvWriteFrame(writer, curFrame);
        curFrame = nextFrame;
    }

    cvReleaseVideoWriter(&writer);
    cvReleaseCapture(&capture);
    return 0;
}
Has anyone seen this problem before, or does anyone see the mistake I made?
Best regards,
Remco
From the documentation, curFrame and nextFrame should be 8-bit single-channel. You are currently just pulling these from the loaded file without checking or converting them as necessary. Can you confirm that the input is of the right type?
Also, you have a nasty mix of C++-style cv::Mat with C-style IplImage*. I'd suggest you upgrade to a more recent version of OpenCV (2.4 has recently been released), and try to stick with either the C++ or the C style methods.
Note also that this optical flow method is classed as obsolete, with a recommendation to use either calcOpticalFlowPyrLK() for sparse features or calcOpticalFlowFarneback() for dense features.
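Before the Farneback alternative, here is a minimal sketch of what a corrected C-API call might look like (my assumption of the intended usage, OpenCV 2.1 era): the frames are converted to 8-bit grayscale, the velocity fields are single-channel 32F buffers, and the handles themselves are passed rather than the addresses of the variables:
// Hypothetical fix sketch for the original loop body.
IplImage* curGray  = cvCreateImage(size, IPL_DEPTH_8U, 1);
IplImage* nextGray = cvCreateImage(size, IPL_DEPTH_8U, 1);
cvCvtColor(curFrame,  curGray,  CV_BGR2GRAY);
cvCvtColor(nextFrame, nextGray, CV_BGR2GRAY);

// One 32F single-channel velocity buffer per direction.
CvMat* velx = cvCreateMat(size.height, size.width, CV_32FC1);
CvMat* vely = cvCreateMat(size.height, size.width, CV_32FC1);
cvCalcOpticalFlowHS(curGray, nextGray, 0, velx, vely, 0.01, IterCriteria);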
Below is some example code demonstrating calcOpticalFlowFarneback(), which is what I believe you are trying to achieve. It takes data from the webcam rather than a file.
#include <opencv2/opencv.hpp>
using namespace cv;
void drawOptFlowMap(const cv::Mat& flow,
                    cv::Mat& cflowmap,
                    int step,
                    const cv::Scalar& color)
{
    for (int y = 0; y < cflowmap.rows; y += step)
        for (int x = 0; x < cflowmap.cols; x += step)
        {
            const cv::Point2f& fxy = flow.at<cv::Point2f>(y, x);
            cv::line(cflowmap,
                     cv::Point(x, y),
                     cv::Point(cvRound(x + fxy.x), cvRound(y + fxy.y)),
                     color);
            cv::circle(cflowmap, cv::Point(x, y), 2, color, -1);
        }
}

int main(int argc, char **args) {
    VideoCapture cap(0); // open the default camera
    if (!cap.isOpened()) // check if we succeeded
        return -1;

    Mat newFrame, newGray, prevGray;
    cap >> newFrame; // get a new frame from the camera
    cvtColor(newFrame, newGray, CV_BGR2GRAY);
    prevGray = newGray.clone();

    double pyr_scale = 0.5;
    int levels = 3;
    int winsize = 5;
    int iterations = 5;
    int poly_n = 5;
    double poly_sigma = 1.1;
    int flags = 0;

    while (1) {
        cap >> newFrame;
        if (newFrame.empty()) break;
        cvtColor(newFrame, newGray, CV_BGR2GRAY);

        Mat flow = Mat(newGray.size(), CV_32FC2);
        /* Do optical flow computation */
        calcOpticalFlowFarneback(prevGray, newGray, flow,
                                 pyr_scale, levels, winsize, iterations,
                                 poly_n, poly_sigma, flags);

        drawOptFlowMap(flow, newFrame, 20, CV_RGB(0, 255, 0));
        namedWindow("Output", 1);
        imshow("Output", newFrame);
        waitKey(1);
        prevGray = newGray.clone();
    }
    return 0;
}
The above code is pretty similar to the fback.cpp sample code which comes with OpenCV.