How to send grayscale cv::Mat to gnuplot - c++

Hello community,
Using C++, OpenCV and gnuplot, my goal is to showcase Sobel values in a video comparing images with depth of field against ones without.
I've been saving each frame as a cv::Mat, converting it to grayscale, blurring it with a 3x3 kernel and applying Sobel to it. I normalized the result and it displayed fine in imshow. With the help of gnuplot-iostream I want to create a 3D image in gnuplot similar to this gnuplot example picture, showing the intensity, which is between 0 and 255 after normalization.
gnuplot doesn't seem to natively support cv::Mat, so I tried a couple of ways to feed it in, all of which produced just one line and/or wrong scaling. This is the code I'm using to convert the Mat into a vector, which gnuplot seems to accept with no issues:
if (sobelxy.isContinuous()) {
    // datastart/dataend are uchar*; cast them so the vector receives
    // whole double elements rather than raw bytes (the Mat is CV_64F here)
    imgvec.assign((const double*)sobelxy.datastart,
                  (const double*)sobelxy.dataend);
}
else {
    for (int i = 0; i < sobelxy.rows; i++) {
        imgvec.insert(imgvec.end(), sobelxy.ptr<double>(i),
                      sobelxy.ptr<double>(i) + sobelxy.cols);
    }
}
I could access pixels one by one, but that is very performance-heavy and not well suited for video, so I'm wondering whether there's a way to either preprocess the vector so that gnuplot gives me the correct result, or to pass certain parameters to gnuplot so it reads the vector correctly.
Thank you in advance.
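For reference, one option that avoids the flattened vector entirely is to stream the matrix to gnuplot row by row in gnuplot's inline matrix format, which preserves the 2D structure that flattening loses (a minimal sketch, assuming sobelxy is a single-channel CV_64F Mat and using gnuplot-iostream's ostream interface; the doubled 'e' is gnuplot's terminator for inline matrix data):

#include <opencv2/opencv.hpp>
#include "gnuplot-iostream.h"

// Render a single-channel CV_64F Mat as a 3D height field in gnuplot.
void plotSobelSurface(const cv::Mat& sobelxy) {
    Gnuplot gp;
    gp << "set zrange [0:255]\n";                  // normalized intensity range
    gp << "splot '-' matrix with pm3d notitle\n";
    for (int i = 0; i < sobelxy.rows; i++) {
        const double* row = sobelxy.ptr<double>(i);
        for (int j = 0; j < sobelxy.cols; j++)
            gp << row[j] << ' ';
        gp << '\n';                                // one matrix row per image row
    }
    gp << "e\ne\n" << std::flush;                  // inline matrix data ends with two 'e' lines
}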

Related

How to use CImg functions with pixel data?

I am using Visual Studio and looking to find a useful image processing library that will take care of basic image processing functions such as rotation so that I don't have to keep coding them manually. I came across CImg and it supports this, as well as many other useful functions, along with interpolation.
However, all the examples I've seen show CImg being used by loading and using full images. I want to work with pixel data. So my loops are the typical:
for (x = 0; x < width; x++)
    for (y = 0; y < height; y++)
I want to perform bilinear or bicubic rotation in this instance and I see CImg supports this. It provides a rotate() and get_rotate function, among others.
I can't find any examples online that show how to use this with pixel data. Ideally, I could simply pass it the pixel color, x, y, and interpolation method, and have it return the result.
Could anyone provide any helpful suggestions? If CImg is not the right library for this type of thing, could anyone recommend a simple, light-weight, easy-to-use one?
Thank you!
You can copy pixel data to a CImg object using iterators, and copy it back when you are done.
std::vector<uint8_t> pixels_src, pixels_dst;
size_t width = 640, height = 480, n_colors = 3;  // example dimensions

// Copy from pixel data. Note that CImg stores channels as separate
// planes (all R, then all G, then all B), so a raw copy assumes
// pixels_src is in planar order rather than interleaved RGB.
cimg_library::CImg<uint8_t> image(width, height, 1, n_colors);
std::copy(pixels_src.begin(), pixels_src.end(), image.begin());

// Do image processing

// Copy to pixel data
pixels_dst.resize(width * height * n_colors);
std::copy(image.begin(), image.end(), pixels_dst.begin());
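For the rotation step itself, CImg's get_rotate takes the angle in degrees plus an interpolation mode, where (if I read the docs right) 0 is nearest-neighbour, 1 is linear and 2 is cubic, so bicubic rotation could look like:

// Rotate by 30 degrees with bicubic interpolation (mode 2);
// the third argument selects the boundary handling (0 = dirichlet)
cimg_library::CImg<uint8_t> rotated = image.get_rotate(30.0f, 2, 0);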

sepFilter2D OpenCV unexpected results

I'm trying to apply an 8x8 separable mean filter to an image.
The filter is 2D separable.
I'm converting the following code from Matlab:
Kernel = ones(n);
% for conv 2 without zeropadding
LimgLarge = padarray(Limg,[n n],'circular');
LimgKer = conv2(LimgLarge,Kernel,'same')/(n^2);
LKerCorr = LimgKer(n+1:end-n,n+1:end-n);
First I pad the image by the filter size, then correlate in 2D, and finally crop the image region back out.
Now I'm trying to implement the same thing in C++ using OpenCV.
I have loaded the image, then called the following commands:
m_kernelSize = 8;
m_kernelX = Mat::ones(m_kernelSize,1,CV_32FC1);
m_kernelX = m_kernelX / m_kernelSize;
m_kernelY = Mat::ones(1,m_kernelSize,CV_32FC1);
m_kernelY = m_kernelY / m_kernelSize;
sepFilter2D(m_logImage,m_filteredImage,m_logImage.depth(),m_kernelX,m_kernelY,Point(-1,-1),0,BORDER_REPLICATE);
I expected to receive the same results, but I'm still getting totally different results from Matlab.
I'd rather not pad the image, do the correlation and finally crop the image again; I expected to get the same results by using the BORDER_REPLICATE argument.
Incidentally, I'm aware of the copyMakeBorder function, but I'd rather not use it, because sepFilter2D handles the border regions by itself.
Since you said you are only loading the image before the code snippet you showed, I can see two potential flaws.
First, if you do nothing between loading the source image and your code snippet, then your source image is an 8-bit image, and since you set the function argument ddepth to m_logImage.depth(), you are also requesting an 8-bit destination image.
However, after reading the documentation of sepFilter2D, I am not sure that this is a valid combination of src.depth() and ddepth.
Can you try using the following line:
sepFilter2D(m_logImage,m_filteredImage,CV_32F,m_kernelX,m_kernelY,Point(-1,-1),0,BORDER_REPLICATE);
Second, check that you loaded your source image using the flag CV_LOAD_IMAGE_GRAYSCALE, so that it only has one channel and not three.
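In code, that check might look like the following sketch (the file name is made up; the convertTo is there because a float kernel usually goes with a float image):

// Load as single-channel grayscale, then promote to float for filtering
cv::Mat m_logImage = cv::imread("input.png", CV_LOAD_IMAGE_GRAYSCALE);
m_logImage.convertTo(m_logImage, CV_32F);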
I followed the Matlab code line by line; the mistake was somewhere else.
Anyway, the following two methods return the same results.
First method, using a full 8x8 filter:
// Big filter mode - now used only for debug mode
m_kernel = Mat::ones(m_kernelSize, m_kernelSize, type);
// Pad by the kernel size on each side, replicating the border pixels
cv::Mat LimgLarge;
cv::copyMakeBorder(m_logImage, LimgLarge, m_kernelSize, m_kernelSize,
                   m_kernelSize, m_kernelSize, BORDER_REPLICATE);
// Big filter
filter2D(LimgLarge, m_filteredImage, LimgLarge.depth(), m_kernel,
         Point(-1, -1), 0, BORDER_CONSTANT);
m_filteredImage = m_filteredImage / (m_kernelSize * m_kernelSize);
// Crop back to the original image region
cv::Rect roi(cv::Point(m_kernelSize, m_kernelSize),
             cv::Point(m_filteredImage.cols - m_kernelSize,
                       m_filteredImage.rows - m_kernelSize));
cv::Mat croppedImage = m_filteredImage(roi);
m_diffImage = m_logImage - croppedImage;
Second method, using the separable 8x8 filter:
sepFilter2D(m_logImage,m_filteredImage,m_logImage.depth(),m_kernelX,m_kernelY,Point(-1,-1),0,BORDER_REPLICATE);
m_filteredImage = m_filteredImage / (m_kernelSize*m_kernelSize);
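For what it's worth, OpenCV's boxFilter expresses the same normalized mean filter in a single call (a sketch, assuming a single-channel float image):

// Normalized 8x8 mean filter; normalize=true divides by the kernel area,
// so no manual division by kernelSize^2 is needed
cv::boxFilter(m_logImage, m_filteredImage, CV_32F, cv::Size(8, 8),
              cv::Point(-1, -1), true, cv::BORDER_REPLICATE);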

Laser line detection using OpenCV

I'm working on a project in which I need to detect a red laser line in an image. This is the strategy I have in mind.
Separate the R, G, B channels in the image.
Threshold the images at a high intensity value.
Using the 3 binary images generated, perform the element-wise operation r && !g && !b (&& is logical AND, ! is logical NOT).
The resulting matrix is a binary image with 1 on the regions where the laser was present.
This worked with a few test images in Matlab. But my problem is that this needs to be implemented using OpenCV in C/C++.
I've tried going through most of the library functions, but there seems to be no intuitive/simple way of working with binary images and performing logical operations on them.
Can someone please point me to the OpenCV functions/methods that you think I might find useful? I've figured out that cvThreshold can be used for thresholding, but that's pretty much it.
So you already figured out steps 1 and 2 in OpenCV, then? If you are just trying to use the logical operators, OpenCV gives you access to the raw data, which you can then operate on with logical operators. Assuming you have already split into three channels and thresholded:
// three binary images in the format you specified above
cv::Mat g;
cv::Mat b;
cv::Mat r;
// Mat::data is a member pointer, not a method
uchar* gptr = g.data;
uchar* bptr = b.data;
uchar* rptr = r.data;
// assuming the matrix data is continuous you can just iterate straight through it
if (g.isContinuous() && r.isContinuous() && b.isContinuous())
{
    for (int i = 0; i < g.rows * g.cols; i++)
    {
        // note: the result is 0/1; scale by 255 if you need a displayable mask
        rptr[i] = rptr[i] && !bptr[i] && !gptr[i];
    }
}
r now contains the output you described. You could also copy it into a new matrix if you don't want to overwrite r.
There are several ways of iterating through a cv::Mat and accessing all the data points, and C++ provides all the logical operators you could want. To my knowledge OpenCV does not provide matrix logical operator functions, but you could write your own very easily, as shown above.
Edit
As suggested by QuentinGeissmann, you could accomplish the same thing using the bitwise_not and bitwise_and functions. I was not aware that they existed. I suspect that using them would be slower because of the number of times the data must be iterated through, but it could be done in less code.
cv::bitwise_not(g,g);
cv::bitwise_not(b,b);
cv::bitwise_and(b,g,b);
cv::bitwise_and(r,b,r);
//r now contains r&&!b&&!g
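For completeness, the channel split and thresholding assumed above might look like this (a sketch; bgrImage stands for your input frame, and the threshold of 200 is illustrative):

std::vector<cv::Mat> channels;
cv::split(bgrImage, channels);   // OpenCV channel order: [0]=B, [1]=G, [2]=R
cv::Mat b, g, r;
cv::threshold(channels[0], b, 200, 255, cv::THRESH_BINARY);
cv::threshold(channels[1], g, 200, 255, cv::THRESH_BINARY);
cv::threshold(channels[2], r, 200, 255, cv::THRESH_BINARY);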

OpenCV, how to use arrays of points for smoothing and sampling contours?

I have a problem getting my head around smoothing and sampling contours in OpenCV (C++ API).
Let's say I have a sequence of points retrieved from cv::findContours (for instance, applied to this image):
Ultimately, I want
To smooth a sequence of points using different kernels.
To resize the sequence using different types of interpolations.
After smoothing, I hope to have a result like this:
I also considered drawing my contour into a cv::Mat, filtering the Mat (using blur or morphological operations) and re-finding the contours, but this is slow and suboptimal. So, ideally, I could do the job using exclusively the point sequence.
I read a few posts on it and naively thought that I could simply convert a std::vector (of cv::Point) to a cv::Mat and then OpenCV functions like blur/resize would do the job for me... but they did not.
Here is what I tried:
int main( int argc, char** argv ){
    cv::Mat conv, ori;
    ori = cv::imread(argv[1]);
    ori.copyTo(conv);
    cv::cvtColor(ori, ori, CV_BGR2GRAY);
    std::vector<std::vector<cv::Point> > contours;
    std::vector<cv::Vec4i> hierarchy;
    cv::findContours(ori, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_NONE);
    for(int k = 0; k < 100; k += 2){
        cv::Mat smoothCont;
        smoothCont = cv::Mat(contours[0]);
        std::cout << smoothCont.rows << "\t" << smoothCont.cols << std::endl;
        /* Try smoothing: no modification of the array */
        // cv::GaussianBlur(smoothCont, smoothCont, cv::Size(k+1,1), k);
        /* Try sampling: "Assertion failed (func != 0) in resize" */
        // cv::resize(smoothCont, smoothCont, cv::Size(0,0), 1, 1);
        std::vector<std::vector<cv::Point> > v(1);
        smoothCont.copyTo(v[0]);
        cv::drawContours(conv, v, 0, cv::Scalar(255,0,0), 2, CV_AA);
        std::cout << k << std::endl;
        cv::imshow("conv", conv);
        cv::waitKey();
    }
    return 1;
}
Could anyone explain how to do this?
In addition, since I am likely to work with much smaller contours, I was wondering how this approach deals with border effects (e.g. when smoothing, since contours are circular, the last elements of a sequence must be used to calculate the new value of the first elements...).
Thank you very much for your advice.
Edit:
I also tried cv::approxPolyDP(), but as you can see, it tends to preserve extremal points (which I want to remove). Results are shown for epsilon = 0, 6, 12 and 24.
Edit 2:
As suggested by Ben, it seems that cv::GaussianBlur() is not supported but cv::blur() is. The result comes much closer to my expectation. Here are my results using it with k = 13, 53 and 103.
To get around the border effect, I did:
cv::copyMakeBorder(smoothCont,smoothCont, (k-1)/2,(k-1)/2 ,0, 0, cv::BORDER_WRAP);
cv::blur(smoothCont, result, cv::Size(1,k),cv::Point(-1,-1));
result.rowRange(cv::Range((k-1)/2,1+result.rows-(k-1)/2)).copyTo(v[0]);
I am still looking for solutions to interpolate/sample my contour.
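For the sampling part, one option is to walk the contour by cumulative arc length and linearly interpolate between the original points (a sketch; resampleContour is a hypothetical helper, not an OpenCV function):

#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

// Resample a closed contour to nOut points, evenly spaced along its
// arc length, linearly interpolating between the original points.
// Assumes no duplicate consecutive points (true for findContours output).
std::vector<cv::Point2f> resampleContour(const std::vector<cv::Point>& in, int nOut) {
    int n = (int)in.size();
    std::vector<double> cum(n + 1, 0.0);   // cumulative length, incl. closing segment
    for (int i = 0; i < n; i++) {
        cv::Point d = in[(i + 1) % n] - in[i];
        cum[i + 1] = cum[i] + std::sqrt((double)d.x * d.x + (double)d.y * d.y);
    }
    std::vector<cv::Point2f> out(nOut);
    int seg = 0;
    for (int k = 0; k < nOut; k++) {
        double target = cum[n] * k / nOut;       // arc-length position of sample k
        while (cum[seg + 1] < target) seg++;     // find the segment containing it
        double t = (target - cum[seg]) / (cum[seg + 1] - cum[seg]);
        cv::Point a = in[seg], b = in[(seg + 1) % n];
        out[k] = cv::Point2f((float)(a.x + t * (b.x - a.x)),
                             (float)(a.y + t * (b.y - a.y)));
    }
    return out;
}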
Your Gaussian blurring doesn't work because you're blurring in the column direction, but there is only one column. Using GaussianBlur() leads to a "feature not implemented" error in OpenCV when trying to copy the vector back to a cv::Mat (that's probably why you have this strange resize() in your code), but everything works fine using cv::blur(); no need to resize(). Try Size(0,41) for example. Using cv::BORDER_WRAP for the border issue doesn't seem to work either, but here is another thread where someone found a workaround for that.
Oh... one more thing: you said that your contours are likely to be much smaller. Smoothing your contour that way will shrink it. The extreme case is k = size_of_contour, which results in a single point. So don't choose your k too big.
Another possibility is to use the algorithm openFrameworks uses:
https://github.com/openframeworks/openFrameworks/blob/master/libs/openFrameworks/graphics/ofPolyline.cpp#L416-459
It traverses the contour and essentially applies a low-pass filter using the points around each one. It should do exactly what you want with low overhead (there's no reason to run a big filter over an image that's essentially just a contour).
How about approxPolyDP()?
It uses the Douglas-Peucker algorithm to 'smooth' a contour (basically getting rid of most of the contour's points and leaving the ones that represent a good approximation of your contour).
From the OpenCV 2.1 documentation, section Basic Structures:
template<typename T>
explicit Mat::Mat(const vector<T>& vec, bool copyData=false)
You probably want to set the 2nd param to true here:
smoothCont = cv::Mat(contours[0], true);
and try again (this way cv::GaussianBlur gets its own copy of the data, which it is then able to modify).
I know this was written a long time ago, but have you tried a big erode followed by a big dilate (an opening), and then finding the contours? It looks like a simple and fast solution, and I think it could work, at least to some degree.
Basically, sudden changes in the contour correspond to high-frequency content. An easy way to smooth your contour is to compute the Fourier coefficients, treating the coordinates as a complex signal x + iy, and then eliminate the high-frequency coefficients.
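A sketch of that idea using cv::dft on the contour treated as a complex signal (fourierSmooth and keepFrac are made-up names; keepFrac is the fraction of low-frequency coefficients to keep, e.g. 0.1):

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

// Low-pass filter a closed contour via the DFT of x + iy.
std::vector<cv::Point> fourierSmooth(const std::vector<cv::Point>& contour,
                                     double keepFrac) {
    int n = (int)contour.size();
    cv::Mat z(n, 1, CV_32FC2);             // the contour as a complex signal
    for (int i = 0; i < n; i++)
        z.at<cv::Vec2f>(i) = cv::Vec2f((float)contour[i].x, (float)contour[i].y);
    cv::Mat Z;
    cv::dft(z, Z);                          // 1-D complex DFT
    // zero the high frequencies, which sit in the middle of the spectrum
    int keep = std::max(1, (int)(keepFrac * n / 2));
    for (int i = keep; i < n - keep; i++)
        Z.at<cv::Vec2f>(i) = cv::Vec2f(0, 0);
    cv::Mat s;
    cv::dft(Z, s, cv::DFT_INVERSE | cv::DFT_SCALE);
    std::vector<cv::Point> out(n);
    for (int i = 0; i < n; i++) {
        cv::Vec2f v = s.at<cv::Vec2f>(i);
        out[i] = cv::Point(cvRound(v[0]), cvRound(v[1]));
    }
    return out;
}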
My take... many years later!
Maybe two easy ways to do it:
Loop a few times with dilate, blur, erode, and find the contours on that updated shape; I found 6-7 iterations give good results (see the sketch below).
Create a bounding box of the contour, and draw an ellipse inside the bounded rectangle.
Adding the visual results below:
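A sketch of the first approach (kernel size and iteration count are guesses to tune per image):

#include <opencv2/opencv.hpp>
#include <vector>

// Smooth a binary shape by repeated dilate/blur/erode, then re-find
// its contours on the cleaned-up mask.
std::vector<std::vector<cv::Point> > smoothShapeContours(cv::Mat mask) {
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(5, 5));
    for (int i = 0; i < 7; i++) {
        cv::dilate(mask, mask, kernel);
        cv::blur(mask, mask, cv::Size(5, 5));
        cv::erode(mask, mask, kernel);
    }
    // blur leaves gray values behind; re-binarize before findContours
    cv::threshold(mask, mask, 127, 255, cv::THRESH_BINARY);
    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(mask, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);
    return contours;
}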
This worked for me; the edges are smoother than before:
medianBlur(mat, mat, 7)
morphologyEx(mat, mat, MORPH_OPEN, getStructuringElement(MORPH_RECT, Size(12.0, 12.0)))
val contours = getContours(mat)
This is opencv4android code.

Inconsistent outcome of findChessboardCorners() in OpenCV

I am writing C++ code with OpenCV in which I'm trying to detect a chessboard in an image (loaded from a .jpg file) in order to warp the perspective of the image. When the chessboard is found by findChessboardCorners(), the rest of my code works perfectly. But sometimes the function does not detect the pattern, and this behavior seems to be random.
For example, there is one image that works at its original resolution of 2560x1920, but not if I scale it down to 800x600 with GIMP first. However, another image seems to do the opposite: it doesn't work at the original resolution, but does work scaled down.
Here's the bit of my code that does the detection:
Mat grayimg = imread(argv[1], CV_LOAD_IMAGE_GRAYSCALE);
if (grayimg.data == NULL) {   // was testing img.data, a different variable
    printf("Unable to read image");
    return 0;
}
bool patternfound = findChessboardCorners(grayimg, patternsize, corners,
        CALIB_CB_ADAPTIVE_THRESH + CALIB_CB_FAST_CHECK);
if (!patternfound) {
    printf("Chessboard not found");
    return 0;
}
Is there some kind of bug in OpenCV causing this behavior? Does anyone have any tips on how to pre-process the image so the function works more consistently?
I already tried playing around with the parameters CALIB_CB_ADAPTIVE_THRESH, CALIB_CB_NORMALIZE_IMAGE, CALIB_CB_FILTER_QUADS and CALIB_CB_FAST_CHECK. I'm also getting the same results when I pass in a color image.
Thanks in advance
EDIT: I'm using OpenCV version 2.4.1
I had a very hard time getting findChessboardCorners to work until I added a white border around the chessboard.
I found that hint somewhere in the more recent documentation.
Before adding the border, it would sometimes be impossible to recognize the chessboard, but with the white border it works every time.
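If reprinting the target with a bigger margin isn't an option and the board runs to the edge of the frame, a white border can also be synthesized around the image itself (a sketch; the 30-pixel width is arbitrary, and detected corners then need the offset subtracted):

cv::Mat padded;
cv::copyMakeBorder(grayimg, padded, 30, 30, 30, 30,
                   cv::BORDER_CONSTANT, cv::Scalar(255));
bool found = cv::findChessboardCorners(padded, patternsize, corners,
                                       cv::CALIB_CB_ADAPTIVE_THRESH);
// corners are now in padded coordinates: subtract (30, 30) before use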
Welcome to the joys of real-world computer vision :-)
You don't post any images, and findChessboardCorners is a bit too high-level to debug. I suggest displaying (in Octave, or Matlab, or with more OpenCV code) the locations of the detected corners on top of the image, to see if enough are detected. If none are, try running cvCornerHarris by itself on the image.
Sometimes the cause of the problem is excessive graininess of the image: try blurring it just a little and see if that helps.
Actually, try removing the CALIB_CB_FAST_CHECK option and giving it a try.
CALIB_CB_ADAPTIVE_THRESH + CALIB_CB_FAST_CHECK is not the same thing as CALIB_CB_ADAPTIVE_THRESH | CALIB_CB_FAST_CHECK; you should use | (bitwise OR). With these particular single-bit flags the two happen to produce the same value, but | is the correct idiom and won't silently break if a flag is ever combined twice.
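In code, that suggestion amounts to:

bool patternfound = findChessboardCorners(grayimg, patternsize, corners,
        CALIB_CB_ADAPTIVE_THRESH | CALIB_CB_FAST_CHECK);
// or drop CALIB_CB_FAST_CHECK entirely, as suggested above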