I wrote a simple oct-file to wrap an OpenCV function. This is my code:
#include <octave/oct.h>
#include <opencv2/imgproc.hpp>

DEFUN_DLD (cornerHarris, args, , "Harris Corner Detector")
{
  // Processing arguments (five are read below, so require all five)
  if (args.length() < 5)
    print_usage();

  Matrix octInMat = args(0).matrix_value();
  int blockSize  = args(1).int_value();
  int kSize      = args(2).int_value();
  double k       = args(3).double_value();
  int borderType = args(4).int_value();

  // Dimensions
  dim_vector dims = octInMat.dims();
  int h = dims.elem(0);
  int w = dims.elem(1);

  // OpenCV matrices
  cv::Mat cvInMat  = cv::Mat::zeros(h, w, CV_8U);
  cv::Mat cvOutMat = cv::Mat::zeros(h, w, CV_32FC1);

  // Converting the Octave matrix to an OpenCV matrix
  for (int r = 0; r < h; r++)
  {
    for (int s = 0; s < w; s++)
    {
      cvInMat.at<int>(r, s) = octInMat(r, s);
    }
  }

  cv::cornerHarris(cvInMat, cvOutMat, blockSize, kSize, k, borderType);

  // Converting the OpenCV matrix back to an Octave matrix
  // (cvOutMat is CV_32FC1, so it is read as float, not double)
  Matrix octOutMat = Matrix(dim_vector(h, w));
  for (int r = 0; r < h; r++)
  {
    for (int s = 0; s < w; s++)
    {
      octOutMat(r, s) = cvOutMat.at<float>(r, s);
    }
  }

  return octave_value(octOutMat);
}
But I get a segmentation fault when the value of the w variable increases. Is there a shorter way to convert the matrices without looping? Or is there a way to resolve the segmentation fault?
Documentation:
octave::Matrix
cv::Mat
I figured it out by commenting out my code line by line. The problem came from this line, because of a type-casting issue.
cvInMat.at<int>(r,s) = octInMat(r,s);
I changed it as follows.
cvInMat.at<uchar>(r,s) = (uchar)octInMat(r,s);
This answer helped me to fix it.
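As for converting without explicit loops: a minimal sketch, assuming the input is a real matrix whose column-major double buffer is exposed via data() (or fortran_vec()). The idea is to wrap the buffer, transpose, and let convertTo do the saturating cast:

// Octave stores doubles column-major, so viewed row-major the buffer is w x h;
// wrap it that way, transpose to h x w, then convert to 8-bit
cv::Mat wrapped(w, h, CV_64F, const_cast<double*>(octInMat.data()));
cv::Mat cvInMat;
cv::transpose(wrapped, cvInMat);    // now h x w, still CV_64F
cvInMat.convertTo(cvInMat, CV_8U);  // saturating cast, like the (uchar) fix above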
I have a cv::Mat of an RGB image as
cv::Mat cv_img
I want to set zero values in cv_img at some positions. For example, the bottom half of the image should be filled with zeros. How can I do it in C++ and OpenCV? Thanks, all.
I have found the setTo function, and a mask may be a candidate solution, but defining the binary mask is difficult for me.
cv_img.setTo(Scalar(0,0,0), mask);
You can achieve it by setting the pixels to the desired value. Just define the intervals of the ROI (region of interest).
Here is a simple code to guide you:

#include <opencv2/opencv.hpp>

using namespace cv;

int main()
{
    Mat img = imread("/ur/img/dir/img.jpg");

    // Walk the bottom half and zero each BGR channel
    for (int i = img.rows / 2; i < img.rows; i++)
    {
        for (int j = 0; j < img.cols; j++)
        {
            img.at<Vec3b>(Point(j, i))[0] = 0;
            img.at<Vec3b>(Point(j, i))[1] = 0;
            img.at<Vec3b>(Point(j, i))[2] = 0;
        }
    }

    imshow("Result", img);
    waitKey(0);
    return 0;
}
You can try this:
int w = cv_img.cols;
int h = cv_img.rows;
cv::Rect rectZero(0, h/2, w, h/2);
cv_img(rectZero) = cv::Scalar(0,0,0);
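If you specifically want the setTo/mask route mentioned in the question, a minimal sketch: the mask must be a single-channel 8-bit image, non-zero wherever pixels should be overwritten.

// Build a CV_8U mask that is 255 on the bottom half, 0 elsewhere
cv::Mat mask = cv::Mat::zeros(cv_img.size(), CV_8U);
mask(cv::Rect(0, cv_img.rows / 2, cv_img.cols, cv_img.rows - cv_img.rows / 2)) = 255;
cv_img.setTo(cv::Scalar(0, 0, 0), mask);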
I am a newbie to OpenCV and am trying to port code from Matlab to C++,
and I get Segmentation fault: 11 when I do the matrix multiplication.
I found that it may be caused by the trans*U in my C++ code.
The size of trans is 16032768x3 (rows x cols) and U is 3x3, so I am pretty sure they can be multiplied.
Here is the link to my photos:
Photos
I hope someone can help me solve my problem. Thanks!
Here is my C++ code:
#include <opencv2/opencv.hpp>
#include <iostream>
#include <math.h>

using namespace std;
using namespace cv;

int main(int argc, char const *argv[])
{
    // Import images
    Mat_<double> img1, img2, img3;
    img1 = imread("S1.jpg", IMREAD_GRAYSCALE);
    img2 = imread("S2.jpg", IMREAD_GRAYSCALE);
    img3 = imread("S3.jpg", IMREAD_GRAYSCALE);

    // Push all the images into one matrix for SVDecomp
    Mat_<double> svd_use;
    svd_use.push_back(img1.reshape(0, 1));
    svd_use.push_back(img2.reshape(0, 1));
    svd_use.push_back(img3.reshape(0, 1));

    Mat_<double> source, trans, B, W, U, VT;
    trans = svd_use.t();
    source = svd_use * trans;
    SVDecomp(source, W, U, VT);

    // To make sure the values match Matlab's sign convention
    W = (Mat_<double>(3, 3) << W[0][0], 0, 0, 0, W[1][0], 0, 0, 0, W[2][0]);
    U = (Mat_<double>(3, 3) << -U[0][0], -U[0][1], U[0][2], -U[1][0], -U[1][1], U[1][2], -U[2][0], -U[2][1], U[2][2]);

    B = trans * U; // <-- this part causes the Segmentation fault: 11

    return 0;
}
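(One sanity check, offered as an assumption rather than a diagnosis: if any of the imread paths is wrong, the returned Mat is empty, and the failure only surfaces later, e.g. at the multiplication. A minimal guard right after the imread calls:)

// Fail fast if any image did not load
if (img1.empty() || img2.empty() || img3.empty())
{
    cerr << "failed to load S1/S2/S3" << endl;
    return 1;
}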
Here is my Matlab code:
%Import images
source_img1 = rgb2gray(imread('S1.JPG'));
source_img2 = rgb2gray(imread('S2.JPG'));
source_img3 = rgb2gray(imread('S3.JPG'));
%Vectorize images
img_vector1 = source_img1(:);
img_vector2 = source_img2(:);
img_vector3 = source_img3(:);
%Calculate SVD
t = double([img_vector1'; img_vector2'; img_vector3']);
[U,S,V] = svd(t*t');
B = t'*V*S^(-1/2);
Also, I have a question about matrix powers: in Matlab I can directly calculate S^(-1/2) (as in the last line above);
is there any way to do the same thing in OpenCV?
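For the power question, a sketch under the assumption that the matrix is the diagonal singular-value matrix (S in the Matlab code, W after the reshape above): for a diagonal matrix a fractional power applies elementwise to the diagonal, so S^(-1/2) needs no general matrix-power routine.

// Elementwise 1/sqrt() on the diagonal of the 3x3 W built above
Mat_<double> W_invsqrt = Mat_<double>::zeros(3, 3);
for (int i = 0; i < 3; i++)
    W_invsqrt[i][i] = 1.0 / sqrt(W[i][i]);

A general symmetric matrix would need an eigendecomposition first (cv::eigen), applying the power to the eigenvalues and recomposing.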
I'm trying to convolve an image using the FFT. I use OpenCV, so images are in Mat containers. I convert the color image to a gray image, then add a second, all-zero channel for the imaginary numbers. Then I take this 2-channel Mat and convolve it with Prewitt's kernel. I get a result very different from the one I get with the normal convolution algorithm. The left image is the output I get using the FFT, and the right image is the output of normal convolution.
Below is the pseudo-algorithm of how I do the operation:

Convert the image Mat and kernel Mat to complex Mats by adding a second channel (result Mat type is CV_32FC2)
Assign all Mat elements to complex vectors
Zero-pad the vectors to the same next power of 2
FFT the vectors
Multiply both vectors elementwise and assign the result to the result vector
Inverse-FFT the result vector
Convert the result vector to a Mat
I think the FFT algorithm is not the problem, because when I take an image, FFT it, then inverse-FFT it, I get the original image back just fine. But I could be wrong. So here is the FFT algorithm. Notice how there are two of them; I use the second one. I also tried other FFT algorithms and they all produce the same output. FFT'ing and IFFT'ing the same image only skips the signal-multiplication step above, so I think that's where the problem is. Here is the code of the operation:
std::vector<cf> signalMultiplication(std::vector<cf> lh, std::vector<cf> rh) {
    std::vector<cf> imVec = lh, kerVec = rh, resultVec;
    resultVec.resize(imVec.size());
    std::transform(imVec.begin(), imVec.end(), kerVec.begin(), resultVec.begin(), std::multiplies<cf>());
    return resultVec;
}
I tried multiplying them with a for loop, but the result was the same. I don't know where the problem is, and I can't post the whole code here since it is too long, so tell me where you think the problem is and I'll post the code of that part.
@Paul, below is the main body of the code:
cv::Mat convolution2D(cv::Mat image, cv::Mat kernel) {
    cv::Mat imMat, kerMat;
    imMat = convertToComplexMat(image);
    kerMat = convertToComplexMat(kernel);

    std::vector<cf> imVec, kerVec, resultVec;
    imVec = matElementsToVector<cf>(imMat);
    kerVec = matElementsToVector<cf>(kerMat);

    // Pad both signals to the same power-of-2 length
    float power = log2f(imVec.size());
    if (abs(power - (int)power) == 0)
        power++;
    else
        power = ceil(power);
    zeroPadding(imVec, power);
    zeroPadding(kerVec, power);

    // The FFT code I linked takes a valarray as argument, so I convert the
    // vectors to valarrays and back
    std::valarray<cf> imCArr(imVec.data(), imVec.size());
    std::valarray<cf> kerCArr(kerVec.data(), kerVec.size());
    fftRosetta(imCArr);
    fftRosetta(kerCArr);
    imVec.assign(std::begin(imCArr), std::end(imCArr));
    kerVec.assign(std::begin(kerCArr), std::end(kerCArr));

    resultVec = signalMultiplication(imVec, kerVec);

    std::valarray<cf> resCArr(resultVec.data(), resultVec.size());
    ifftRosetta(resCArr);
    resultVec.assign(std::begin(resCArr), std::end(resCArr));

    cv::Mat resultMat;
    resultMat = vectorToMatElementsRowMajor(resultVec, imMat.rows, imMat.cols, imMat.type());
    std::vector<cv::Mat> matVec;
    cv::split(resultMat, matVec);
    return matVec[0];
}
These are the custom functions:
convertToComplexMat, matElementsToVector, zeroPadding, fftRosetta, ifftRosetta, signalMultiplication, vectorToMatElementsRowMajor
signalMultiplication is posted above and fftRosetta/ifftRosetta are linked, so here are the rest of the functions:
using cf = std::complex<float>;

cv::Mat convertToComplexMat(cv::Mat imageMat) {
    cv::Mat matOper;
    if (imageMat.channels() == 3)
        cv::cvtColor(imageMat, matOper, cv::COLOR_BGR2GRAY);
    else
        matOper = imageMat.clone();
    matOper.convertTo(matOper, CV_32FC1);

    // Add an all-zero imaginary channel
    cv::Mat compChannel = cv::Mat::zeros(matOper.rows, matOper.cols, CV_32FC1);
    std::vector<cv::Mat> channels;
    channels.push_back(matOper);
    channels.push_back(compChannel);
    cv::merge(channels, matOper);
    return matOper;
}

template <typename T>
std::vector<T> matElementsToVector(cv::Mat operand) {
    std::vector<T> vecOper;
    int cn = operand.channels();
    for (int i = 0; i < operand.total(); i++) {
        if (cn == 1)
            vecOper.push_back(operand.at<cv::Vec<T, 1>>(i)[0]);
        else if (cn == 2) {
            if (typeid(T) == typeid(cf)) {
                T xd = operand.at<T>(i);
                vecOper.push_back(xd);
            }
            else
                for (int k = 0; k < cn; k++)
                    vecOper.push_back(operand.at<cv::Vec<T, 2>>(i)[k]);
        }
        else if (cn == 3)
            for (int k = 0; k < cn; k++)
                vecOper.push_back(operand.at<cv::Vec<T, 3>>(i)[k]);
    }
    return vecOper;
}
void zeroPadding(std::vector<cf>& a, int power) {
    int p, ioper;
    if (power == -1)
        p = ceil(log2f(a.size()));
    else
        p = power;
    ioper = pow(2, p);

    // Append zeros until the size reaches 2^p
    int size = a.size();
    for (int i = 0; i < ioper - size; i++) {
        a.push_back(0);
    }
}

template <typename T>
cv::Mat vectorToMatElementsRowMajor(std::vector<T> operand, int mrows, int mcols, int mtype) {
    cv::Mat matoper(mrows, mcols, mtype);
    for (int j = 0; j < matoper.total(); j++) {
        matoper.at<T>(j) = operand[j];
    }
    return matoper;
}
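One assumption in the pipeline above worth double-checking: multiplication in the frequency domain computes circular convolution, so to reproduce linear convolution both signals should be zero-padded to at least imageLength + kernelLength - 1 before the FFT; padding only to the image's own next power of 2 lets the kernel's tail wrap around. A sketch of that sizing, using the names from convolution2D:

// Pad to the next power of 2 that fits the full linear-convolution length
size_t linearSize = imVec.size() + kerVec.size() - 1;
int p = (int)ceil(log2f((float)linearSize));
zeroPadding(imVec, p);
zeroPadding(kerVec, p);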
@Cris, I tried it again with the OpenCV DFT like you said, following the directions here. I applied the DFT to the image and the kernel, multiplied them elementwise, then applied the IDFT. But the result is something very different now. I can see a resemblance of the original image in there, but there are multiple shadows of it at different angles. I think the problem is how I do the signal multiplication, but I can't find any answers on how to multiply 2D signals. Here is the code; the output image is below it:
cv::Mat convolution2DopenCV(cv::Mat image, cv::Mat kernel) {
    cv::Mat paddedImage, paddedKernel, imgOper, kerOper;
    if (image.channels() == 3)
        cv::cvtColor(image, imgOper, cv::COLOR_BGR2GRAY);
    else
        imgOper = image.clone();
    kerOper = kernel;

    // Pad both to the optimal DFT size of the image
    int m = cv::getOptimalDFTSize(imgOper.rows);
    int n = cv::getOptimalDFTSize(imgOper.cols);
    cv::copyMakeBorder(imgOper, paddedImage, 0, m - imgOper.rows, 0, n - imgOper.cols, cv::BORDER_CONSTANT, cv::Scalar::all(0));
    cv::copyMakeBorder(kerOper, paddedKernel, 0, m - kerOper.rows, 0, n - kerOper.cols, cv::BORDER_CONSTANT, cv::Scalar::all(0));

    // Forward DFT of image and kernel (real plane + zero imaginary plane)
    cv::Mat planesImage[] = { cv::Mat_<float>(paddedImage), cv::Mat::zeros(paddedImage.size(), CV_32F) };
    cv::Mat cmpImgMat;
    cv::merge(planesImage, 2, cmpImgMat);
    cv::dft(cmpImgMat, cmpImgMat);

    cv::Mat planesKernel[] = { cv::Mat_<float>(paddedKernel), cv::Mat::zeros(paddedKernel.size(), CV_32F) };
    cv::Mat cmpKerMat;
    cv::merge(planesKernel, 2, cmpKerMat);
    cv::dft(cmpKerMat, cmpKerMat);

    // Elementwise multiplication, inverse DFT, take the real plane
    cv::Mat resultMat = cmpImgMat.mul(cmpKerMat);
    cv::Mat planes[2];
    cv::idft(resultMat, resultMat);
    cv::split(resultMat, planes);
    cv::normalize(planes[0], planes[0], 0, 255, cv::NORM_MINMAX);
    return planes[0];
}
That's everything; if there is something I'm missing, let me know.
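One candidate for the signal-multiplication problem, offered as a sketch rather than a confirmed fix: cv::Mat::mul multiplies the two channels independently (re*re, im*im), which is not complex multiplication. OpenCV provides cv::mulSpectrums for the per-element complex product of two DFT spectra:

// Per-element complex multiplication of the two spectra
cv::Mat resultMat;
cv::mulSpectrums(cmpImgMat, cmpKerMat, resultMat, 0); // flags = 0, B not conjugated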
I want to do three-channel image filtering with the help of the C++ OpenCV library, using 3x3 kernels with a different value set for each channel. To do this, I first split the RGB image into its three channels: red, green and blue. Then I defined a different kernel matrix for each of the three channels. When I processed them with the filter2D function, the code threw an exception:

Unhandled exception at 0x00007FFAA150A388 in opencvTry.exe: Microsoft C++ exception: cv::Exception at memory location 0x0000002D4CAF9660. occurred

What is the reason I can't do it in the code below?
#include "opencv2/objdetect/objdetect.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
#include <typeinfo>
#include <stdlib.h>
using namespace cv;
using namespace std;
int main()
{
Mat src = imread("path\\color_palette.png", IMREAD_COLOR); //load image
int blue_array[159][318];
int green_array[159][318];
int red_array[159][318];
for (int i = 0; i < src.rows; i++) {
for (int j = 0; j < src.cols; j++) {
int a = int(src.at<Vec3b>(i, j).val[0]);
blue_array[i][j] = a;
//cout << blue_array[i][j] << ' ' ;
int b = int(src.at<Vec3b>(i, j).val[1]);
green_array[i][j] = b;
int c = int(src.at<Vec3b>(i, j).val[2]);
red_array[i][j] = c;
}
}
cv::Mat blue_array_mat(159, 318, CV_32S, blue_array);
cv::Mat green_array_mat(159, 318, CV_32S, green_array);
cv::Mat red_array_mat(159, 318, CV_32S, red_array);
float kernelForBlueData[9] = { 1,0,1, 2,0,-2, 1,0,-1};
cv::Mat kernelForBlue(3, 3, CV_32F, kernelForBlueData);
float kernelForGreenData[9] = { 1./16, 2./16, 1./16, 2./16, 4./16,2./16, 1./16, 2./16, 1./16 };
cv::Mat kernelForGreen(3, 3, CV_32F, kernelForGreenData);
float kernelForRedData[9] = { 1./9,1./9, 1./9, 1./9, 1./9,1./9, 1./9, 1./9,1./9 };
cv::Mat kernelForRed(3, 3, CV_32F, kernelForRedData);
//cv::filter2D(blue_array_mat, blue_array_mat, -1, kernelForBlue, Point(-1, -1), 5.0, BORDER_REPLICATE);
filter2D(blue_array_mat, blue_array_mat, 0, kernelForBlue);
imshow("filter", blue_array_mat);
waitKey(0);
return 0;
}
You’re using a cv::Mat constructor that expects a pointer to the data (e.g. an int*), but you’re passing it an int**. This is the reason for the crash, I presume.
Why not create the cv::Mat first and then write the data directly into it?
Note that OpenCV has a function that does the channel splitting for you:
cv::Mat chans[3];
cv::split(src, chans);
//...
cv::filter2D(chans[2], chans[2], 0, kernelForBlue);
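One more detail worth flagging: filter2D does not accept CV_32S input (the documented input depths are CV_8U, CV_16U/CV_16S, CV_32F and CV_64F), so a sketch of the full route would convert each split channel to a supported depth first:

cv::Mat chans[3];
cv::split(src, chans);
cv::Mat blue;
chans[0].convertTo(blue, CV_32F);            // blue is chans[0] in BGR order
cv::filter2D(blue, blue, -1, kernelForBlue); // now a supported depth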
Here https://stackoverflow.com/a/49817506/1277317
there is an example of how to use a convolutional network in OpenCV, but the example is in Python.
How do I do the same in C++?
Namely, how do I do this in C++:

net = cv.dnn.readNetFromTensorflow('model.pb')
net.setInput(inp.transpose(0, 3, 1, 2))
cv_out = net.forward()

And how do I create the Mat for the setInput function for an image of size 60x162x1? I use float for the data, just like in the Python example.
Now I have this code and it gives incorrect results:
Net net = readNet("e://xor.pb");

float x0[60][162];
for (int i = 0; i < 60; i++)
{
    for (int j = 0; j < 162; j++)
    {
        x0[i][j] = 0;
    }
}
x0[5][59] = 0.5;
x0[5][60] = 1;
x0[5][61] = 1;
x0[5][62] = 0.5;

Mat aaa = cv::Mat(60, 162, CV_32F, x0);
Mat inputBlob = dnn::blobFromImage(aaa, 1.0, Size(60, 162));

net.setInput(inputBlob, "conv2d_input");
Mat prob = net.forward("activation_2/Softmax");
for (int i = 0; i < prob.cols; i++)
{
    qDebug() << i << prob.at<float>(0, i);
}
In OpenCV, almost all functions are designed to work with 3D matrices, so the easiest way for me to work with CV_32F 4D matrices is to address their data directly. The following code works correctly and quickly:
Net net = readNet("e://xor.pb");

// Build a 1x1x60x162 (NCHW) blob and write into its float buffer directly
const int sizes[] = { 1, 1, 60, 162 };
Mat tenz = Mat::zeros(4, sizes, CV_32F);
float* dataB = (float*)tenz.data;

// Element (y, x): the row stride is the width, tenz.size[3]
int x = 1;
int y = 2;
dataB[y * tenz.size[3] + x] = 0.5f;
x = 1;
y = 3;
dataB[y * tenz.size[3] + x] = 1.0f;

try
{
    net.setInput(tenz, "input_layer_my_input_1");
    Mat prob = net.forward("output_layer_my/MatMul");
}
catch (cv::Exception& e)
{
    const char* err_msg = e.what();
    qDebug() << "err_msg" << err_msg;
}
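For comparison, a minimal sketch of the blobFromImage route for the same shape, assuming a single-channel 60x162 float image. Note that cv::Size is (width, height), and blobFromImage with default arguments keeps the original size and produces an NCHW blob of 1x1x60x162:

Net net = readNet("e://xor.pb");
Mat img = Mat::zeros(60, 162, CV_32F); // H = 60, W = 162
img.at<float>(2, 1) = 0.5f;            // same element as above: y = 2, x = 1
Mat blob = dnn::blobFromImage(img);    // 1x1x60x162
net.setInput(blob, "input_layer_my_input_1");
Mat prob = net.forward("output_layer_my/MatMul");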