splitting image results in unhandled exception error - c++

I am currently planning to split my image into 3 channels so I can get the RGB values of an image and plot a scatter graph, so I can model it using a normal distribution, calculating the covariance matrix, mean, etc.
Then I will calculate the distance between the background points and the actual image to segment the image.
For my first task, I have written the following code:
VideoCapture cam(0);
//int id=0;
Mat image, Rch, Gch, Bch;
vector<Mat> rgb(3); // rgb is a vector of 3 matrices
namedWindow("window");
while (1)
{
    cam >> image;
    split(image, rgb);
    Bch = rgb[0];
    Gch = rgb[1];
    Rch = rgb[2];
But as soon as it reaches the split function (I stepped through it), it causes an unhandled exception: access violation writing location 0xFEEEFEEE.
I am still new to OpenCV, so I am not used to dealing with unhandled exception errors.
Thanks

It sounds as if split expects the rgb vector to already hold three usable Mat instances.
But vector<Mat> rgb(3) only default-constructs three elements, so each one is an empty Mat header with no data.
Try explicitly adding three initialized Mats to the vector and run again.
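A minimal sketch of that suggestion (assuming image is a 3-channel 8-bit BGR frame; whether this resolves the crash depends on the build setup):
vector<Mat> rgb;
for (int i = 0; i < 3; ++i)
    rgb.push_back(Mat(image.size(), CV_8UC1)); // one allocated plane per channel
split(image, rgb);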

Although this is an old question, I would like to share the solution that worked for me. Instead of vector<Mat> rgb(3); I used Mat channels[3];. I realized there was something wrong with using a vector when I was not able to use split even on an image loaded with imread. Unfortunately, I cannot explain why this change works, but if someone can, that would be great.
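For reference, a sketch of that workaround (split also accepts a plain Mat array via its Mat* overload):
Mat channels[3]; // plain array instead of std::vector
split(image, channels);
Mat Bch = channels[0], Gch = channels[1], Rch = channels[2];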

Related

Change columns order in opencv mat without copying data

I have an OpenCV Mat which has M rows and 3 columns. Is there a way to reorder the Mat so that the first and last (i.e., third) columns are swapped while the middle column is kept in place, without copying the data?
OpenCV stores image data as an array of pixels. Sometimes you can get a column or rectangle view of an image (for example with col()), in which case the data is not continuous and, as far as I know, element access is computed with a step between rows. However, the data is shared and is still one array.
The question then becomes: can I swap two portions of an array without copying the data? Not as far as I know.
You can use optimized OpenCV functions to swap them, but the data will be copied.
Also, non-continuous data is much slower than continuous data in OpenCV functions. More can be read here.
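As an illustration, a sketch of swapping the columns with copies (the Mat name m is hypothetical):
Mat swapped(m.rows, m.cols, m.type());
m.col(2).copyTo(swapped.col(0)); // last column first
m.col(1).copyTo(swapped.col(1)); // middle column stays
m.col(0).copyTo(swapped.col(2)); // first column last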
You can use the OpenCV function flip for that. As an example, the following code flips an image about the middle column.
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    Mat img, flipped; // your mat
    img = imread("lena.jpg");
    flip(img, flipped, 1); // flip code 1 mirrors around the vertical axis; flipped is the output
    imshow("img", flipped);
    waitKey(0);
    return 0;
}

dlib-19.1: Initialize dlib::matrix from image (e.g. dlib::cv_image) for DNN training

I am currently trying to train a DNN with images I have on file (OCR context... input images per class are aggregate images of several thousand fixed size tiny images).
I have some code to open and properly segment the aggregate images into small OpenCV cv::Mats. My problem is that there does not seem to be a way to either
train the DNN on dlib::cv_image directly (which can wrap a cv::Mat; I'm getting 500+ lines of compiler errors), or
easily convert/wrap a cv::Mat to a dlib::matrix without copying every element.
I'm pretty sure I'm missing something here, any pointers would be greatly appreciated.
Note: the only variant I got to compile was calling dlib::dnn_trainer::train() with a vector of dlib::matrix (size fixed at compile time) and a vector of unsigned long labels (unsigned labels did not compile), although train() is templated on both types. Any pointers?
You don't have to fix the size of dlib::matrix at compile time. Just call set_size() on it. See also http://dlib.net/faq.html#HowdoIsetthesizeofamatrixatruntime.
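For example, a minimal sketch:
#include <dlib/matrix.h>
dlib::matrix<float> m;  // size not fixed at compile time
m.set_size(240, 320);   // rows and columns chosen at runtime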
Also, if you want to use something other than a dlib::matrix as input you can do that. You just have to define your own input layer. The interface you must implement is fully documented here: http://dlib.net/dlib/dnn/input_abstract.h.html#EXAMPLE_INPUT_LAYER. You could also look at the existing input layers for examples. But be sure to read the documentation as it will answer questions you are likely to have.
Dlib has an amazing function for this task: http://dlib.net/imaging.html#assign_image, but it copies every element.
Here is sample code showing how it can be used:
#include <opencv2/opencv.hpp>
#include <dlib/opencv.h>
#include <dlib/image_transforms.h>

// mat should be a greyscale image (8UC1)
void cv_to_dlib_float_matrix(const cv::Mat& mat, dlib::matrix<float>& res)
{
    cv::Mat tmp(mat.rows, mat.cols, CV_32FC1); // note: Mat takes (rows, cols, type)
    cv::normalize(mat, tmp, 0.0, 1.0, cv::NORM_MINMAX, CV_32FC1); // rescale to [0,1] as float
    dlib::assign_image(res, dlib::cv_image<float>(tmp)); // copies each element
}
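A hypothetical usage sketch (the file name is just a placeholder):
cv::Mat gray = cv::imread("tiles.png", CV_LOAD_IMAGE_GRAYSCALE);
dlib::matrix<float> dm;
cv_to_dlib_float_matrix(gray, dm);
// dm can now be passed to dlib code that expects a matrix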

output from calcOpticalFlowPyrLK function in openCV

I'm new to OpenCV and image processing. I am trying to find moving objects using optical flow (the Lucas-Kanade method) by comparing two frames from a camera that are saved as images on disk.
The part of code I am asking about is:
Mat img = imread("t_mono1.JPG", CV_LOAD_IMAGE_UNCHANGED);//read the image
Mat img2 = imread("t_mono2.JPG", CV_LOAD_IMAGE_UNCHANGED);//read the 2nd image
Mat pts;
Mat pts2;
Mat stat;
Mat err;
goodFeaturesToTrack(img,pts,100,0.01,0.01);//extract features to track
calcOpticalFlowPyrLK(img,img2,pts,pts2,stat,err);//optical flow
cout<<" "<<pts.cols<<"\n"<<pts.rows<<"\n";
I am getting that the size of pts is 100 by 1. I suppose it should have been 100 by 2, since the pixels have x and y coordinates. Furthermore, when I used for loops to display the contents of the arrays, the pts array was all zeros, and all of the arrays were one-dimensional.
I have seen this question:
OpenCV's calcOpticalFlowPyrLK throws exception
I tried using vectors as he did, but when building I get an error saying it cannot convert between different data types. I am using VS2008 with OpenCV 2.4.11.
I would like to get the (x, y) coordinates of the features in the first and second images, plus the error, but all of the arrays passed to calcOpticalFlowPyrLK were one-dimensional, and I don't understand how that can be.
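For reference, the vector-based calling convention that the linked question uses looks roughly like this (a sketch, assuming both images are loaded as 8-bit grayscale):
vector<Point2f> pts, pts2;
vector<uchar> stat;
vector<float> err;
goodFeaturesToTrack(img, pts, 100, 0.01, 0.01); // img must be single-channel 8-bit
calcOpticalFlowPyrLK(img, img2, pts, pts2, stat, err);
// pts[i] and pts2[i] now hold the (x, y) of feature i in each frame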

sepFilter2D Opencv unexpected results

I'm trying to apply an 8x8 separable mean filter to an image.
The filter is 2D separable.
I'm converting the following code from Matlab,
Kernel = ones(n);
% for conv2 without zero-padding
LimgLarge = padarray(Limg,[n n],'circular');
LimgKer = conv2(LimgLarge,Kernel,'same')/(n^2);
LKerCorr = LimgKer(n+1:end-n,n+1:end-n);
First I pad the image by the filter size, then correlate in 2D, and finally crop back to the original image region.
Now I'm trying to implement the same thing in C++ using OpenCV.
I loaded the image, then called the following commands:
m_kernelSize = 8;
m_kernelX = Mat::ones(m_kernelSize,1,CV_32FC1);
m_kernelX = m_kernelX / m_kernelSize;
m_kernelY = Mat::ones(1,m_kernelSize,CV_32FC1);
m_kernelY = m_kernelY / m_kernelSize;
sepFilter2D(m_logImage,m_filteredImage,m_logImage.depth(),m_kernelX,m_kernelY,Point(-1,-1),0,BORDER_REPLICATE);
I expected to receive the same results, but I'm still getting totally different results from Matlab.
I'd rather not pad the image, do the correlation, and finally crop the image again; I expected to get the same results by using the BORDER_REPLICATE argument.
Incidentally, I'm aware of the copyMakeBorder function, but would rather not use it, because sepFilter2D handles the border regions by itself.
Since you said you only load the image before the code snippet you showed, I can see two potential flaws.
First, if you do nothing between loading the source image and your code snippet, then your source image is an 8-bit image, and since you set the function argument ddepth to m_logImage.depth(), you are also requesting an 8-bit destination image.
However, after reading the documentation of sepFilter2D, I am not sure that this is a valid combination of src.depth() and ddepth.
Can you try using the following line:
sepFilter2D(m_logImage,m_filteredImage,CV_32F,m_kernelX,m_kernelY,Point(-1,-1),0,BORDER_REPLICATE);
Second, check that you loaded your source image using the flag CV_LOAD_IMAGE_GRAYSCALE, so that it only has one channel and not three.
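Putting both suggestions together, a sketch (the file name is just a placeholder):
Mat m_logImage = imread("input.png", CV_LOAD_IMAGE_GRAYSCALE); // single-channel 8-bit
Mat m_filteredImage;
sepFilter2D(m_logImage, m_filteredImage, CV_32F, m_kernelX, m_kernelY, Point(-1,-1), 0, BORDER_REPLICATE);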
I followed the Matlab code line by line; the mistake was somewhere else.
Anyway, the following two methods return the same results.
First method, using an 8x8 filter:
// Big filter mode - now used only for debug mode
m_kernel = Mat::ones(m_kernelSize, m_kernelSize, type);
cv::Mat LimgLarge(m_logImage.rows + m_kernelSize*2, m_logImage.cols + m_kernelSize*2, m_logImage.depth());
cv::copyMakeBorder(m_logImage, LimgLarge, m_kernelSize, m_kernelSize, m_kernelSize, m_kernelSize, BORDER_REPLICATE);
// Big filter
filter2D(LimgLarge, m_filteredImage, LimgLarge.depth(), m_kernel, Point(-1,-1), 0, BORDER_CONSTANT);
m_filteredImage = m_filteredImage / (m_kernelSize*m_kernelSize);
cv::Rect roi(cv::Point(m_kernelSize, m_kernelSize), cv::Point(m_filteredImage.cols - m_kernelSize, m_filteredImage.rows - m_kernelSize));
cv::Mat croppedImage = m_filteredImage(roi);
m_diffImage = m_logImage - croppedImage;
Second method, using a separable 8x8 filter:
sepFilter2D(m_logImage,m_filteredImage,m_logImage.depth(),m_kernelX,m_kernelY,Point(-1,-1),0,BORDER_REPLICATE);
m_filteredImage = m_filteredImage / (m_kernelSize*m_kernelSize);

weird behaviour saving image in opencv

After doing some OpenCV operations, I initialize a new image that I'd like to use. Saving this empty image gives a weird result.
The lines I use to save this image are:
Mat dst2(Size(320, 240), CV_8UC3);
imwrite("bla.jpg", dst2);
I should get a black image, but this is what I get. If I move these two lines to the start of the program, everything works fine.
Has anyone had this problem before?
I just noticed that these white lines contain portions of other images I'm processing in the same program.
Regards
Because you did not initialize the image with any values (you just defined the size and type), you get whatever bytes happen to be in memory at that location; that is why the garbage contains pieces of the other images your program processed.
It is the same concept as using or accessing an uninitialized variable.
To paint the image black you can use Mat::setTo; docs here:
http://docs.opencv.org/modules/core/doc/basic_structures.html#mat-setto
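A minimal sketch of both options, zero-filling at construction or with setTo afterwards:
// Option 1: initialize to black directly in the constructor
Mat dst2(Size(320, 240), CV_8UC3, Scalar(0, 0, 0));
// Option 2: create uninitialized, then fill with setTo
Mat dst3(Size(320, 240), CV_8UC3);
dst3.setTo(Scalar(0, 0, 0));
imwrite("bla.jpg", dst2); // now writes a solid black image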