Use OpenCV Threshold with Kinect Image - c++

I'm trying to use the OpenCV Threshold with the depthImage retrieved by the OpenCV VideoCapture module, but I get the following error:
OpenCV Error: Bad argument in unknown function,
file PATHTOOPENCV\opencv\modules\core\src\matrix.cpp line 646
My code is as follows:
#include "opencv2/opencv.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/gpu/gpu.hpp"
cv::VideoCapture kinect;
cv::Mat rgbMap;
cv::Mat dispMap;
bool newFrame;
void setup()
{
kinect.open(CV_CAP_OPENNI);
newFrame = false;
}
void update()
{
if(kinect.grab())
{
kinect.retrieve( rgbMap, CV_CAP_OPENNI_BGR_IMAGE);
kinect.retrieve( dispMap, CV_CAP_OPENNI_DISPARITY_MAP );
newFrame = true;
}
}
void draw()
{
if(newFrame)
{
cv::Mat * _thresSrc = new cv::Mat(dispMap);
cv::Mat * _thresDst = new cv::Mat(dispMap);
cvThreshold(_thresSrc, _thresDst, 24, 255, CV_THRESH_BINARY);
// Draw _thresDst;
delete _thresSrc;
delete _thresDst;
newFrame = false;
}
}
Thank you very much for your help

To start things off, you are mixing the C interface with the C++ interface, and they shouldn't be mixed together!
cv::Mat belongs to the C++ interface, while cvThreshold() belongs to the C interface. You should use cv::threshold() instead:
double cv::threshold(const Mat& src, Mat& dst, double thresh, double maxVal, int thresholdType)
Parameters:
src – Source array (single-channel, 8-bit or 32-bit floating point)
dst – Destination array; will have the same size and the same type as src
thresh – Threshold value
maxVal – Maximum value to use with THRESH_BINARY and THRESH_BINARY_INV thresholding types
thresholdType – Thresholding type (see the discussion)
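Applied to your draw() function, a minimal sketch could look like the following (assuming dispMap is the single-channel 8-bit disparity map that CV_CAP_OPENNI_DISPARITY_MAP retrieves). Note that there is also no need for new/delete here, since cv::Mat manages its own memory:
void draw()
{
    if(newFrame)
    {
        cv::Mat thresDst;
        // C++ interface: works on cv::Mat directly and allocates thresDst automatically
        cv::threshold(dispMap, thresDst, 24, 255, cv::THRESH_BINARY);
        // Draw thresDst;
        newFrame = false;
    }
}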

Related

Copy Mat in opencv

I'm trying to copy an image to another image using OpenCV, but I ran into a problem: the two images are not the same, like this:
This is the code I used:
#include <opencv2\opencv.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <cmath>
#include <iostream>
#include <opencv2\opencv.hpp>
int main()
{
cv::Mat inImg = cv::imread("C:\\Users\\DUY\\Desktop\\basic_shapes.png");
//Data point copy
unsigned char * pData = inImg.data;
int width = inImg.rows;
int height = inImg.cols;
cv::Mat outImg(width, height, CV_8UC1);
//data copy using memcpy function
memcpy(outImg.data, pData, sizeof(unsigned char)*width*height);
//processing and copy check
cv::namedWindow("Test");
imshow("Test", inImg);
cv::namedWindow("Test2");
imshow("Test2", outImg);
cvWaitKey(0);
}
Simply use .clone() function of cv::Mat:
cv::Mat source = cv::imread("basic_shapes.png");
cv::Mat dst = source.clone();
This will do the trick.
You are making an image with only one channel (which means only shades of gray are possible) by using CV_8UC1. You could use CV_8UC3 or CV_8UC4, but for simple copying stick with the clone function.
You actually don't want to copy the data, since you start with an RGB CV_8UC3 image and you want to work on a grayscale CV_8UC1 image.
You should use cvtColor, which will convert your RGB data into grayscale.
#include <opencv2\opencv.hpp>
#include <iostream>
using namespace cv;
int main()
{
Mat inImg = cv::imread("C:\\Users\\DUY\\Desktop\\basic_shapes.png"); // inImg is CV_8UC3
Mat outImg;
cvtColor(inImg, outImg, COLOR_RGB2GRAY); // Now outImg is CV_8UC1
//processing and copy check
imshow("Test", inImg);
imshow("Test2", outImg);
waitKey();
}
With a simple memcpy you're copying a sequence of uchar values like this:
BGR BGR BGR BGR ...
into an image that expects them to be (G for gray):
G G G G ...
and that is what's causing your outImg to be incorrect.
Your code will be correct if you define outImg like this:
cv::Mat outImg(width, height, CV_8UC3); // Instead of CV_8UC1
The best way is to use the OpenCV clone method:
cv::Mat outImg = inImg.clone();
Your original image is in color. cv::Mat outImg(width, height, CV_8UC1); says that your new image is of data type CV_8UC1, which is an 8-bit grayscale image, so you know that is not correct. Then you try to copy an amount of data from the original image to the new image that corresponds to total pixels * 8 bits, which is at best 1/3 of the actual image (assuming the original image was 3 colors, 8 bits per color, a.k.a. a 24-bit image) and perhaps even 1/4 (if it had an alpha channel, making it 4 channels of 8 bits, or a 32-bit image).
TL;DR: your matrices aren't the same type, and you are basing your assumptions about the amount of data to copy on an incorrect, and incorrectly sized, type.
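If you really do want a raw byte copy instead of clone(), a minimal sketch would need to match both the type and the full data size (this assumes the loaded image is a continuous CV_8UC3 BGR image):
#include <opencv2/opencv.hpp>
#include <cstring>
int main()
{
    cv::Mat inImg = cv::imread("basic_shapes.png");         // CV_8UC3: 3 bytes per pixel
    cv::Mat outImg(inImg.rows, inImg.cols, inImg.type());   // same size AND same type as the source
    if (inImg.isContinuous())
    {
        // rows * cols * 3 bytes for an 8-bit BGR image, not rows * cols
        std::memcpy(outImg.data, inImg.data, inImg.total() * inImg.elemSize());
    }
    cv::imshow("Copy", outImg);
    cv::waitKey(0);
}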
Here is a simple piece of code to copy an image.
#include <opencv2/opencv.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <cmath>
int main()
{
cv::Mat inImg = cv::imread("1.jpg");
cv::Mat outImg = inImg.clone();
cv::namedWindow("Test");
imshow("Test", inImg);
cv::namedWindow("Test2");
imshow("Test2", outImg);
cv::waitKey(0);
}
Mat source = imread("1.png", 0);
Mat dest;
source.copyTo(dest);
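As a side note, copyTo() also accepts an optional mask, which is the one case clone() cannot cover. A minimal sketch (the mask here is purely illustrative, and using namespace cv is assumed as above):
Mat source = imread("1.png", 0);
Mat mask = source > 128;                              // hypothetical mask: keep only the bright pixels
Mat dest = Mat::zeros(source.size(), source.type());
source.copyTo(dest, mask);                            // pixels outside the mask stay zero in dest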

OpenCV SIFT key points extraction issue

I tried to extract SIFT key points. It works fine for a sample image I downloaded (height 400px, width 247px, horizontal and vertical resolution 300dpi). The image below shows the extracted points.
Then I tried to apply the same code to an image that I took and edited myself (height 443px, width 541px, horizontal and vertical resolution 72dpi).
To create the above image I rotated the original image, removed its background and resized it using Photoshop, but for that image my code doesn't extract features like it does for the first image.
See the result:
It extracts only very few points. I expect a result like in the first case.
In the second case, when I use the original image without any edits, the program gives points just like in the first case.
Here is the simple code I have used
#include<opencv\cv.h>
#include<opencv\highgui.h>
#include<opencv2\nonfree\nonfree.hpp>
using namespace cv;
int main(){
Mat src, descriptors,dest;
vector<KeyPoint> keypoints;
src = imread(". . .");
cvtColor(src, src, CV_BGR2GRAY);
SIFT sift;
sift(src, src, keypoints, descriptors, false);
drawKeypoints(src, keypoints, dest);
imshow("Sift", dest);
cvWaitKey(0);
return 0;
}
What am I doing wrong here? What do I need to do to get a result like in the first case for my own image after resizing?
Thank you!
Try setting the nfeatures parameter in the SIFT constructor (other parameters may also need adjustment; the code below also lowers contrastThreshold, which keeps the weaker features that a low-contrast, edited image produces).
Here is the constructor definition from the reference:
SIFT::SIFT(int nfeatures=0, int nOctaveLayers=3, double contrastThreshold=0.04, double edgeThreshold=10, double sigma=1.6)
Your code will be:
#include<opencv\cv.h>
#include<opencv\highgui.h>
#include<opencv2\nonfree\nonfree.hpp>
using namespace cv;
using namespace std;
int main(){
Mat src, descriptors,dest;
vector<KeyPoint> keypoints;
src = imread("D:\\ImagesForTest\\leaf.jpg");
cvtColor(src, src, CV_BGR2GRAY);
SIFT sift(2000,3,0.004);
sift(src, src, keypoints, descriptors, false);
drawKeypoints(src, keypoints, dest);
imshow("Sift", dest);
cvWaitKey(0);
return 0;
}
The result:
Dense sampling example:
#include <iostream>
#include <opencv2/opencv.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/features2d/features2d.hpp>
#include "opencv2/nonfree/nonfree.hpp"
int main(int argc, char* argv[])
{
cv::initModule_nonfree();
cv::namedWindow("result");
cv::Mat bgr_img = cv::imread("D:\\ImagesForTest\\lena.jpg");
if (bgr_img.empty())
{
exit(EXIT_FAILURE);
}
cv::Mat gray_img;
cv::cvtColor(bgr_img, gray_img, cv::COLOR_BGR2GRAY);
cv::normalize(gray_img, gray_img, 0, 255, cv::NORM_MINMAX);
cv::DenseFeatureDetector detector(12.0f, 1, 0.1f, 10);
std::vector<cv::KeyPoint> keypoints;
detector.detect(gray_img, keypoints);
std::vector<cv::KeyPoint>::iterator itk;
for (itk = keypoints.begin(); itk != keypoints.end(); ++itk)
{
std::cout << itk->pt << std::endl;
cv::circle(bgr_img, itk->pt, itk->size, cv::Scalar(0,255,255), 1, CV_AA);
cv::circle(bgr_img, itk->pt, 1, cv::Scalar(0,255,0), -1);
}
cv::Ptr<cv::DescriptorExtractor> descriptorExtractor = cv::DescriptorExtractor::create("SURF");
cv::Mat descriptors;
descriptorExtractor->compute( gray_img, keypoints, descriptors);
// SIFT returns large negative values when it goes off the edge of the image.
descriptors.setTo(0, descriptors<0);
imshow("result",bgr_img);
cv::waitKey();
return 0;
}
The result:

Output producing 4 images side by side for single image provided in gradient calculation

The following code is used to calculate the normalized gradient at every pixel of an image. But when using imshow on the calculated gradient, instead of showing the gradient of the provided image, it shows the gradient of the provided image 4 times (side by side).
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <opencv2/core/core.hpp>
#include<iostream>
#include<math.h>
using namespace cv;
using namespace std;
Mat mat2gray(const Mat& src)
{
Mat dst;
normalize(src, dst, 0.0, 1.0, NORM_MINMAX);
return dst;
}
Mat setImage(Mat srcImage){
//GaussianBlur(srcImage,srcImage,Size(3,3),0.5,0.5);
Mat avgImage = Mat::zeros(srcImage.rows,srcImage.cols,CV_32F);
Mat gradient = Mat::zeros(srcImage.rows,srcImage.cols,CV_32F);
Mat norMagnitude = Mat::zeros(srcImage.rows,srcImage.cols,CV_32F);
Mat orientation = Mat::zeros(srcImage.rows,srcImage.cols,CV_32F);
//Mat_<uchar> srcImagetemp = srcImage;
float dx,dy;
for(int i=0;i<srcImage.rows-1;i++){
for(int j=0;j<srcImage.cols-1;j++){
dx=srcImage.at<float>(i,j+1)-srcImage.at<float>(i,j);
dy=srcImage.at<float>(i+1,j)-srcImage.at<float>(i,j);
gradient.at<float>(i,j)=sqrt(dx*dx+dy*dy);
orientation.at<float>(i,j)=atan2(dy,dx);
//cout<<gradient.at<float>(i,j)<<endl;
}
}
GaussianBlur(gradient,avgImage,Size(7,7),3,3);
for(int i=0;i<srcImage.rows;i++){
for(int j=0;j<srcImage.cols;j++){
norMagnitude.at<float>(i,j)=gradient.at<float>(i,j)/max(avgImage.at<float>(i,j),float(4));
//cout<<norMagnitude.at<float>(i,j)<<endl;
}
}
imshow("b",(gradient));
waitKey();
return norMagnitude;
}
int main(int argc,char **argv){
Mat image=imread(argv[1]);
cvtColor( image,image, CV_BGR2GRAY );
Mat newImage=setImage(image);
imshow("a",(newImage));
waitKey();
}
Your incoming source image is of type CV_8UC1, and yet you read it as floats:
dx=srcImage.at<float>(i,j+1)-srcImage.at<float>(i,j);
dy=srcImage.at<float>(i+1,j)-srcImage.at<float>(i,j);
If running under the debugger, this should have thrown an assertion, which would have highlighted the problem. It also explains the 4 side-by-side copies: each 4-byte float read spans 4 of the underlying 1-byte pixels, so every processed row actually sweeps across 4 rows of the source.
Try changing those lines to use unsigned char as follows:
dx=(float)(srcImage.at<unsigned char>(i,j+1)-srcImage.at<unsigned char>(i,j));
dy=(float)(srcImage.at<unsigned char>(i+1,j)-srcImage.at<unsigned char>(i,j));
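Alternatively, a minimal sketch of the fix (keeping setImage from the question untouched, including its at<float> accesses) is to convert the image to CV_32F in main() before calling it:
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
using namespace cv;
int main(int argc, char **argv){
    Mat image = imread(argv[1]);
    cvtColor(image, image, CV_BGR2GRAY);   // image is CV_8UC1 here
    image.convertTo(image, CV_32F);        // now CV_32FC1, so srcImage.at<float>(i,j) is valid
    Mat newImage = setImage(image);        // setImage as defined in the question, unchanged
    imshow("a", newImage);
    waitKey();
}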

OpenCV C++ code to Obj-C++

I was suggested an algorithm to prepare an image for OCR, and the code given to me is great! However, it seems it is not compatible with the iOS build of OpenCV; there are a few different naming conventions and I am having a hard time converting the code to Obj-C++.
Could someone rewrite it for Obj-C++?
Here is the original code:
#include <iostream>
#include <vector>
#include <stdio.h>
#include <stdarg.h>
#include "opencv2/opencv.hpp"
#include "fstream"
#include "iostream"
using namespace std;
using namespace cv;
//-----------------------------------------------------------------------------------------------------
//
//-----------------------------------------------------------------------------------------------------
void CalcBlockMeanVariance(Mat& Img,Mat& Res,float blockSide=21) // blockSide - the parameter (set greater for larger font on image)
{
Mat I;
Img.convertTo(I,CV_32FC1);
Res=Mat::zeros(Img.rows/blockSide,Img.cols/blockSide,CV_32FC1);
Mat inpaintmask;
Mat patch;
Mat smallImg;
Scalar m,s;
for(int i=0;i<Img.rows-blockSide;i+=blockSide)
{
for (int j=0;j<Img.cols-blockSide;j+=blockSide)
{
patch=I(Range(i,i+blockSide+1),Range(j,j+blockSide+1));
cv::meanStdDev(patch,m,s);
if(s[0]>0.01) // Thresholding parameter (set smaller for lower contrast image)
{
Res.at<float>(i/blockSide,j/blockSide)=m[0];
}else
{
Res.at<float>(i/blockSide,j/blockSide)=0;
}
}
}
cv::resize(I,smallImg,Res.size());
cv::threshold(Res,inpaintmask,0.02,1.0,cv::THRESH_BINARY);
Mat inpainted;
smallImg.convertTo(smallImg,CV_8UC1,255);
inpaintmask.convertTo(inpaintmask,CV_8UC1);
inpaint(smallImg, inpaintmask, inpainted, 5, INPAINT_TELEA);
cv::resize(inpainted,Res,Img.size());
Res.convertTo(Res,CV_32FC1,1.0/255.0);
}
//-----------------------------------------------------------------------------------------------------
//
//-----------------------------------------------------------------------------------------------------
int main( int argc, char** argv )
{
namedWindow("Img");
namedWindow("Edges");
//Mat Img=imread("D:\\ImagesForTest\\BookPage.JPG",0);
Mat Img=imread("Test2.JPG",0);
Mat res;
Img.convertTo(Img,CV_32FC1,1.0/255.0);
CalcBlockMeanVariance(Img,res);
res=1.0-res;
res=Img+res;
imshow("Img",Img);
cv::threshold(res,res,0.85,1,cv::THRESH_BINARY);
cv::resize(res,res,cv::Size(res.cols/2,res.rows/2));
imwrite("result.jpg",res*255);
imshow("Edges",res);
waitKey(0);
return 0;
}
My attempt (gives back a blue image with black spots):
void CalcBlockMeanVariance(cv::Mat& Img,cv::Mat& Res,float blockSide=21) // blockSide - the parameter (set greater for larger font on image)
{
cv::Mat I;
Img.convertTo(I,CV_32FC1);
Res=cv::Mat::zeros(Img.rows/blockSide,Img.cols/blockSide,CV_32FC1);
cv::Mat inpaintmask;
cv::Mat patch;
cv::Mat smallImg;
cv::Scalar m,s;
for(int i=0;i<Img.rows-blockSide;i+=blockSide)
{
for (int j=0;j<Img.cols-blockSide;j+=blockSide)
{
patch=I(cv::Rect(j,i,blockSide,blockSide));
cv::meanStdDev(patch,m,s);
if(s[0]>0.01) // Thresholding parameter (set smaller for lower contrast image)
{
Res.at<float>(i/blockSide,j/blockSide)=m[0];
}else
{
Res.at<float>(i/blockSide,j/blockSide)=0;
}
}
}
cv::resize(I,smallImg,Res.size());
cv::threshold(Res,inpaintmask,0.02,1.0,CV_THRESH_BINARY);
cv::Mat inpainted;
smallImg.convertTo(smallImg,CV_8UC1,255);
inpaintmask.convertTo(inpaintmask,CV_8UC1);
inpaint(smallImg, inpaintmask, inpainted, 5, CV_INPAINT_TELEA);
cv::resize(inpainted,Res,Img.size());
Res.convertTo(Res,CV_32FC1,1.0/255.0);
}
Calling the method.
_img = [self cvMatFromUIImage:_endImage];
cv::cvtColor(_img, _img, CV_RGB2GRAY);
_img.convertTo(_img, CV_32FC1, 1.0/255.0);
CalcBlockMeanVariance(_img, _res);
_res = 1.0 - _res;
_res = _img + _res;
cv::threshold(_res,_res,0.85,1,cv::THRESH_BINARY);
cv::resize(_res,_res,cv::Size(_res.cols/2,_res.rows/2));
_endImage = [self UIImageFromMat:_res];
You can simply separate your C++ code into another NSObject class and rename the file from OpenCVUtilities.m to OpenCVUtilities.mm. The .mm extension tells Xcode to use the Objective-C++ compiler for that class, so you don't need to convert the code to Objective-C; it will work as it is.
After doing this you will need to change a few settings in your Project -> Build Settings, as shown in the image below.
For more assistance you can download the project from here
Cheers.

watershed segmentation opencv xcode

I am now learning a code from the opencv codebook (OpenCV 2 Computer Vision Application Programming Cookbook): Chapter 5, Segmenting images using watersheds, page 131.
Here is my main code:
#include "opencv2/opencv.hpp"
#include <string>
using namespace cv;
using namespace std;
class WatershedSegmenter {
private:
cv::Mat markers;
public:
void setMarkers(const cv::Mat& markerImage){
markerImage.convertTo(markers, CV_32S);
}
cv::Mat process(const cv::Mat &image){
cv::watershed(image,markers);
return markers;
}
};
int main ()
{
cv::Mat image = cv::imread("/Users/yaozhongsong/Pictures/IMG_1648.JPG");
// Eliminate noise and smaller objects
cv::Mat fg;
cv::erode(binary,fg,cv::Mat(),cv::Point(-1,-1),6);
// Identify image pixels without objects
cv::Mat bg;
cv::dilate(binary,bg,cv::Mat(),cv::Point(-1,-1),6);
cv::threshold(bg,bg,1,128,cv::THRESH_BINARY_INV);
// Create markers image
cv::Mat markers(binary.size(),CV_8U,cv::Scalar(0));
markers= fg+bg;
// Create watershed segmentation object
WatershedSegmenter segmenter;
// Set markers and process
segmenter.setMarkers(markers);
segmenter.process(image);
imshow("a",image);
std::cout<<".";
cv::waitKey(0);
}
However, it doesn't work. How could I initialize a binary image? And how could I make this segmentation code work?
I am not very clear about this part of the book.
Thanks in advance!
There are a couple of things that should be mentioned about your code:
Watershed expects the input and the output image to have the same size;
You probably want to get rid of the const parameters in the methods;
Notice that the result of watershed is actually markers and not image, as your code suggests. Because of that, you need to grab the return value of process()!
This is your code, with the fixes above:
// Usage: ./app input.jpg
#include "opencv2/opencv.hpp"
#include <string>
using namespace cv;
using namespace std;
class WatershedSegmenter{
private:
cv::Mat markers;
public:
void setMarkers(cv::Mat& markerImage)
{
markerImage.convertTo(markers, CV_32S);
}
cv::Mat process(cv::Mat &image)
{
cv::watershed(image, markers);
markers.convertTo(markers,CV_8U);
return markers;
}
};
int main(int argc, char* argv[])
{
cv::Mat image = cv::imread(argv[1]);
cv::Mat binary;// = cv::imread(argv[2], 0);
cv::cvtColor(image, binary, CV_BGR2GRAY);
cv::threshold(binary, binary, 100, 255, THRESH_BINARY);
imshow("originalimage", image);
imshow("originalbinary", binary);
// Eliminate noise and smaller objects
cv::Mat fg;
cv::erode(binary,fg,cv::Mat(),cv::Point(-1,-1),2);
imshow("fg", fg);
// Identify image pixels without objects
cv::Mat bg;
cv::dilate(binary,bg,cv::Mat(),cv::Point(-1,-1),3);
cv::threshold(bg,bg,1, 128,cv::THRESH_BINARY_INV);
imshow("bg", bg);
// Create markers image
cv::Mat markers(binary.size(),CV_8U,cv::Scalar(0));
markers= fg+bg;
imshow("markers", markers);
// Create watershed segmentation object
WatershedSegmenter segmenter;
segmenter.setMarkers(markers);
cv::Mat result = segmenter.process(image);
result.convertTo(result,CV_8U);
imshow("final_result", result);
cv::waitKey(0);
return 0;
}
I took the liberty of using Abid's input image for testing and this is what I got:
Below is the simplified version of your code, and it works fine for me. Check it out :
#include "opencv2/objdetect/objdetect.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
using namespace cv;
using namespace std;
int main ()
{
Mat image = imread("sofwatershed.jpg");
Mat binary = imread("sofwsthresh.png",0);
// Eliminate noise and smaller objects
Mat fg;
erode(binary,fg,Mat(),Point(-1,-1),2);
// Identify image pixels without objects
Mat bg;
dilate(binary,bg,Mat(),Point(-1,-1),3);
threshold(bg,bg,1,128,THRESH_BINARY_INV);
// Create markers image
Mat markers(binary.size(),CV_8U,Scalar(0));
markers= fg+bg;
markers.convertTo(markers, CV_32S);
watershed(image,markers);
markers.convertTo(markers,CV_8U);
imshow("a",markers);
waitKey(0);
}
Below is my input image :
Below is my output image :
See the code explanation here : Simple watershed Sample in OpenCV
I had the same problem as you, following the exact same code sample from the cookbook (great book, btw).
To give some context, I was coding under Visual Studio 2013 and OpenCV 2.4.8. After a lot of searching and no solutions, I decided to change the IDE.
It's still Visual Studio, BUT it's 2010!!!! And boom, it works!
Be careful about how you configure Visual Studio with OpenCV. Here's a great tutorial for installation here
Good day to all