I was wondering how to use the cvDCT function in OpenCV (C++).
Does anyone have an example?
The manual explains both the parameters and the inner workings of the function.
You can use the following code:
IplImage* src0 = cvLoadImage(strPath, CV_LOAD_IMAGE_GRAYSCALE);
IplImage* src = cvCreateImage(cvGetSize(src0), IPL_DEPTH_32F, 1);
cvConvert(src0, src);
IplImage* dst = cvCreateImage(cvGetSize(src0), IPL_DEPTH_32F, 1);
cvDCT(src, dst, CV_DXT_FORWARD);   // forward DCT of the 32F image
//cvDCT(dst, src, CV_DXT_INVERSE); // inverse DCT (round trip)
//cvConvert(src, src0);            // convert back to 8-bit for display
cvShowImage("Source", src0);
I am trying to convert a cv::Mat to IplImage on a PC with these characteristics:
OpenCV: 3.4.14
OS: Win 10
Language: C++
Here are examples of the different options I tried:
cv::Mat MBin = cv::Mat::zeros(cv::Size(64, 64), CV_32FC1);
IplImage* image0= new IplImage(MBin);
IplImage image1 = MBin;
IplImage* image2 = cvCloneImage(&(IplImage)MBin);
IplImage* image3;
image3 = cvCreateImage(cvSize(MBin.cols, MBin.rows), 8, 3);
IplImage image4 = MBin;
cvCopy(&image4, image3);
Each of the imageX lines produces the error in the title.
This is the only solution that doesn't generate a compiler error:
#include <opencv2/core/types_c.h>
Mat Img = imread("1.jpg");
IplImage IBin_2 = cvIplImage(MBin);
IplImage* IBin = &IBin_2;
Before OpenCV 3.x, Mat had a constructor Mat(const IplImage* img, bool copyData=false);, but in OpenCV 3.x that constructor was removed.
So you could refer to the following example to convert a Mat to an IplImage.
//Mat -> IplImage
//EXAMPLE:
//shallow copy (the IplImage header shares Img's data):
Mat Img = imread("1.jpg");
IplImage IplBuf = cvIplImage(Img);   // in OpenCV 2.x this was: IplImage IplBuf = Img;
IplImage* pBinary = &IplBuf;
//For a deep copy, just add another copy of the data:
IplImage* input = cvCloneImage(pBinary);
Also, you could refer to this link for more information.
//opencv 4.5.2
//img is an existing single-channel CV_8U cv::Mat
IplImage* IplImage_img = cvCreateImage(cvSize(img.cols, img.rows), 8, 1);
cv::Mat MatImg = cv::cvarrToMat(IplImage_img);  // header over IplImage_img's data, no copy
img.copyTo(MatImg);                             // fills IplImage_img with img's pixels
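Putting the two directions together, here is a minimal sketch for OpenCV 3.x/4.x. The exact header locations of cvIplImage and cvarrToMat vary between versions, so treat the includes as an assumption to check against your install:
#include <opencv2/opencv.hpp>
#include <opencv2/core/core_c.h>    // cvarrToMat, cvCloneImage, cvReleaseImage
#include <opencv2/core/types_c.h>   // cvIplImage

int main()
{
    // Mat -> IplImage: cvIplImage builds a header that shares the Mat's data
    cv::Mat MBin = cv::Mat::zeros(cv::Size(64, 64), CV_32FC1);
    IplImage header = cvIplImage(MBin);    // no copy; MBin must outlive 'header'
    IplImage* pBin  = &header;
    IplImage* deep  = cvCloneImage(pBin);  // deep copy, independent of MBin

    // IplImage -> Mat: cvarrToMat builds a Mat header over the IplImage data
    cv::Mat view = cv::cvarrToMat(deep);         // no copy
    cv::Mat copy = cv::cvarrToMat(deep).clone(); // deep copy

    cvReleaseImage(&deep);
    return 0;
}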
void doCorrectIntensityVariation(Mat& image)
{
Mat kernel = getStructuringElement(MORPH_ELLIPSE, Size(19,19));
Mat closed;
morphologyEx(image, closed, MORPH_CLOSE, kernel);
image.convertTo(image, CV_32F); // divide requires floating-point
divide(image, closed, image, 1, CV_32F);
normalize(image, image, 0, 255, NORM_MINMAX);
image.convertTo(image, CV_8UC1); // convert back to unsigned int
}
inline void correctIntensityVariation(IplImage *img)
{
//Mat imgMat(img); copy the img
Mat imgMat;
imgMat = img; //no copy is done, imgMat is a header of img
doCorrectIntensityVariation(imgMat);
imshow("gamma corrected",imgMat); cvWaitKey(0);
}
When I call
cvShowImage ("normal", n_im); cvWaitKey (0);
correctIntensityVariation(n_im);//here n_im is IplImage*
cvShowImage ("After processed", n_im); cvWaitKey (0);
// here I require n_im for further processing
I wanted "After processed" to be same as that of "gamma corrected" but what I found "After processed" was not the same as that of "gamma corrected" but same as that of "normal" . Why?? What is going wrong??
A very simple wrapper should do the job.
Cheatsheet of OpenCV
I rarely use the old API, because Mat is much easier to deal with and has no performance penalty compared with the old C API. As the OpenCV tutorial page says: "The main downside of the C++ interface is that many embedded development systems at the moment support only C. Therefore, unless you are targeting embedded platforms, there's no point to using the old methods (unless you're a masochist programmer and you're asking for trouble)."
OpenCV tutorial
cv::Mat to Ipl
Ipl to cv::Mat and Mat to Ipl
IplImage* pImg = cvLoadImage("lena.jpg");
cv::Mat img(pImg, 0); //transform Ipl to Mat, 0 means do not copy the data
IplImage qImg; //not a pointer; it is impossible to overload the operators of a raw pointer
qImg = IplImage(img); //transform Mat to Ipl
Edit: I made a mistake earlier. If the Mat gets reallocated inside the function, you need to copy the data back, or try to steal the resource from the Mat (I don't know how to do that yet).
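This reallocation is exactly what happens in the question: convertTo with a different depth gives the Mat a new buffer, so the Mat header stops pointing at the IplImage's pixels and the IplImage never sees the processed result. A rough sketch of the effect (an illustration only; cvLoadImage and CV_LOAD_IMAGE_GRAYSCALE assume an OpenCV 2.x-era build):
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    IplImage* img = cvLoadImage("onebit_31.png", CV_LOAD_IMAGE_GRAYSCALE);
    cv::Mat header = cv::cvarrToMat(img);      // shares img's pixels, no copy
    std::cout << (header.data == (uchar*)img->imageData) << '\n'; // 1: same buffer
    header.convertTo(header, CV_32F);          // different depth -> Mat allocates a new buffer
    std::cout << (header.data == (uchar*)img->imageData) << '\n'; // 0: img is no longer updated
    cvReleaseImage(&img);
    return 0;
}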
Copy the data
void doCorrectIntensityVariation(cv::Mat& image)
{
cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(19,19));
cv::Mat closed;
cv::morphologyEx(image, closed, cv::MORPH_CLOSE, kernel);
image.convertTo(image, CV_32F); // divide requires floating-point
cv::divide(image, closed, image, 1, CV_32F);
cv::normalize(image, image, 0, 255, cv::NORM_MINMAX);
image.convertTo(image, CV_8UC1); // convert back to unsigned int
}
//no need to change the name of the function; the compiler treats
//these as different overloads in C++
void doCorrectIntensityVariation(IplImage **img)
{
cv::Mat imgMat;
imgMat = *img; //no copy is done, imgMat is a header of img
doCorrectIntensityVariation(imgMat);
IplImage* old = *img;
IplImage src = imgMat;
*img = cvCloneImage(&src);
cvReleaseImage(&old);
}
int main()
{
std::string const name = "onebit_31.png";
cv::Mat mat = cv::imread(name);
if(mat.data){
doCorrectIntensityVariation(mat);
cv::imshow("gamma corrected mat",mat);
cv::waitKey();
}
IplImage* templat = cvLoadImage(name.c_str(), 1);
if(templat){
doCorrectIntensityVariation(&templat);
cvShowImage("mainWin", templat);
// wait for a key
cvWaitKey(0);
cvReleaseImage(&templat);
}
return 0;
}
You could write a small helper function to take care of this chore:
void copy_mat_to_Ipl(cv::Mat const &src, IplImage **dst)
{
IplImage* old = *dst;
IplImage temp_src = src;
*dst = cvCloneImage(&temp_src);
cvReleaseImage(&old);
}
and call it in the function
void doCorrectIntensityVariation(IplImage **img)
{
cv::Mat imgMat;
imgMat = *img; //no copy is done, imgMat is a header of img
doCorrectIntensityVariation(imgMat);
copy_mat_to_Ipl(imgMat, img);
}
I will post how to "steal" the resource from the Mat rather than copy it once I figure out a solid solution. Does anyone know how to do it?
I want to see if a template is present in an image using OpenCV and C++. However, because the images are taken at different distances and the object appears at different positions in the image, the match does not occur correctly.
here is my code:
IplImage* image = cvLoadImage("C:/images/Photo0734.jpg", 1);
IplImage* templat = cvLoadImage("C:/images/templatecoin.jpg", 1);
int percent = 25;
// declare a destination IplImage object with correct size, depth and channels
IplImage* image3 = cvCreateImage(cvSize((int)((image->width*percent)/100),
                                        (int)((image->height*percent)/100)),
                                 image->depth, image->nChannels);
//use cvResize to resize source to a destination image
cvResize(image, image3);
IplImage* image2 = cvCreateImage(cvSize(image3->width, image3->height),
IPL_DEPTH_8U, 1);
IplImage* templat2 = cvCreateImage(cvSize(templat->width,
templat->height), IPL_DEPTH_8U, 1);
cvCvtColor(image3, image2, CV_BGR2GRAY);
cvCvtColor(templat, templat2, CV_BGR2GRAY);
int w = image3->width - templat->width + 1;
int h = image3->height - templat->height + 1;
IplImage* result = cvCreateImage(cvSize(w, h), IPL_DEPTH_32F, 1);
cvMatchTemplate(image2, templat2, result, CV_TM_CCORR_NORMED);
double min_val, max_val;
CvPoint min_loc, max_loc;
cvMinMaxLoc(result, &min_val, &max_val, &min_loc, &max_loc);
cvRectangle(image3, max_loc, cvPoint(max_loc.x+templat->width,
max_loc.y+templat->height), cvScalar(0,1,1), 1);
cvShowImage("src", image3);
//cvShowImage("result image", result);
cvWaitKey(0);
Please note that I am unable to use Mat. Is it possible to use IplImage* and make the code invariant to scaling and rotation? Please help me.
Have a look at these:
SIFT Wiki
SIFT example
OpenCV SIFT documentation
I think they can be useful for you.
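To make the idea concrete, here is a minimal, untested sketch of feature-based matching with the C++ features2d API (the asker wants IplImage*, but the detection step needs the newer interface; cv::SIFT is in the main module only from OpenCV 4.4 on, in older builds it lives in xfeatures2d/nonfree, and the 0.75 ratio threshold is just a common default):
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    cv::Mat scene = cv::imread("C:/images/Photo0734.jpg", cv::IMREAD_GRAYSCALE);
    cv::Mat tmpl  = cv::imread("C:/images/templatecoin.jpg", cv::IMREAD_GRAYSCALE);

    cv::Ptr<cv::SIFT> sift = cv::SIFT::create();
    std::vector<cv::KeyPoint> kScene, kTmpl;
    cv::Mat dScene, dTmpl;
    sift->detectAndCompute(scene, cv::noArray(), kScene, dScene);
    sift->detectAndCompute(tmpl,  cv::noArray(), kTmpl,  dTmpl);

    // match template descriptors against the scene and keep those that pass
    // Lowe's ratio test; SIFT keypoints are scale and rotation invariant
    cv::BFMatcher matcher(cv::NORM_L2);
    std::vector<std::vector<cv::DMatch>> knn;
    matcher.knnMatch(dTmpl, dScene, knn, 2);
    int good = 0;
    for (const auto& m : knn)
        if (m.size() == 2 && m[0].distance < 0.75f * m[1].distance)
            ++good;

    // if enough good matches survive, the template is probably in the scene
    std::cout << "good matches: " << good << std::endl;
    return 0;
}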
I have a template and I want to know if the template is present in an image. Well I have googled a lot and came to the conclusion that I need to use cvMatchTemplate and cvMinMaxLoc.
Here is my code:
image = cvLoadImage("C:/images/flower.jpg",1);
templat = cvLoadImage("C:/images/flo.jpg",1);
image2=cvCreateImage( cvSize(image->width, image->height), IPL_DEPTH_8U, 1 );
result=cvCreateImage( cvSize(image->width, image->height), IPL_DEPTH_8U, 1 );
cvZero(result);
cvZero(image2);
cvCvtColor(image,image2,CV_BGR2GRAY);
cvMatchTemplate(image2, templat,result,CV_TM_CCORR_NORMED);
double min_val=0, max_val=0;
CvPoint min_loc, max_loc;
cvMinMaxLoc(result, &min_val, &max_val, &min_loc, &max_loc);
cvRectangle(image, max_loc, cvPoint(max_loc.x+templat->width,
max_loc.y+templat->height), cvScalar(0), 1);
cvShowImage( "src", image );
cvShowImage( "result image", result);
cvWaitKey(0);
My problem is that when I run the above code, a message box is displayed saying:
Unhandled exception at 0x747d812f in matching.exe: Microsoft C++ exception: cv::Exception at memory location 0x001ff6ec..
and in the black screen there is a message:
OpenCV Error: Sizes of input arguments do not match <image and template should have the same type> in unknown function, file..\..\..\..\ocv\opencv\scr\cv\cvtempl.cpp, line 356.
Please note that flower.jpg is a coloured image and flo.jpg is the gray scale of that image.
Any ideas of what is happening?
You need to convert both flower.jpg and flo.jpg to single-channel images. Even if flo.jpg is grayscale, you're loading it as a three-channel image. Also, the result image should be IPL_DEPTH_32F instead of IPL_DEPTH_8U.
Here is the correct code (untested):
IplImage* image = cvLoadImage("C:/images/flower.jpg", 1);
IplImage* templat = cvLoadImage("C:/images/flo.jpg", 1);
IplImage* image2 = cvCreateImage(cvSize(image->width, image->height), IPL_DEPTH_8U, 1);
IplImage* templat2 = cvCreateImage(cvSize(templat->width, templat->height), IPL_DEPTH_8U, 1);
cvCvtColor(image, image2, CV_BGR2GRAY);
cvCvtColor(templat, templat2, CV_BGR2GRAY);
int w = image->width - templat->width + 1;
int h = image->height - templat->height + 1;
IplImage* result = cvCreateImage(cvSize(w, h), IPL_DEPTH_32F, 1);
cvMatchTemplate(image2, templat2, result, CV_TM_CCORR_NORMED);
double min_val, max_val;
CvPoint min_loc, max_loc;
cvMinMaxLoc(result, &min_val, &max_val, &min_loc, &max_loc);
cvRectangle(image, max_loc, cvPoint(max_loc.x+templat->width,
max_loc.y+templat->height), cvScalar(0), 1);
cvShowImage("src", image);
cvShowImage("result image", result);
cvWaitKey(0);
Template matching assumes that both image and template have an identical number of channels and channel depth. The simplest way to do this is to load both of them in grayscale:
Mat I = imread("lena.png", 0);
Mat T = imread("template.png", 0);
Note: I would recommend using the OpenCV 2.0 C++ interface, so instead of cvLoadImage use imread. The old interface is no longer developed.
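For reference, a minimal sketch of the same matching flow done entirely with the C++ interface (untested; the enum names IMREAD_GRAYSCALE and TM_CCORR_NORMED are the OpenCV 3.x/4.x spellings, older versions use the CV_-prefixed constants):
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat I = cv::imread("lena.png", cv::IMREAD_GRAYSCALE);
    cv::Mat T = cv::imread("template.png", cv::IMREAD_GRAYSCALE);

    cv::Mat result;
    cv::matchTemplate(I, T, result, cv::TM_CCORR_NORMED);  // result is CV_32F

    double minVal, maxVal;
    cv::Point minLoc, maxLoc;
    cv::minMaxLoc(result, &minVal, &maxVal, &minLoc, &maxLoc);

    // for the normalized cross-correlation method the best match is at maxLoc
    cv::rectangle(I, maxLoc, maxLoc + cv::Point(T.cols, T.rows), cv::Scalar(0), 1);
    cv::imshow("src", I);
    cv::waitKey(0);
    return 0;
}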
When doing:
IplImage blobimg = image;
IplImage *labelImg=cvCreateImage(cvGetSize(&blobimg), IPL_DEPTH_LABEL, 1);
IplImage *test=cvCreateImage(cvGetSize(&blobimg), IPL_DEPTH_8U, 3);
unsigned int result=cvLabel(&blobimg, labelImg, blobs);
cvRenderBlobs(labelImg, blobs, &blobimg,test,CV_BLOB_RENDER_BOUNDING_BOX);
Mat imgMat(test);
imshow("Depth", imgMat);
I notice that my test image comes out empty.
I think I have to do this instead:
cvRenderBlobs(labelImg, blobs, &blobimg,&blobimg,CV_BLOB_RENDER_BOUNDING_BOX);
But cvRenderBlobs' destImg has to have 3 channels and IPL_DEPTH_8U, and my image has only 1 channel since it is a grayscale image.
Can someone tell me why this is and how I can fix it?
Edit
Where image comes from:
Mat *depthImage = new Mat(480, 640, CV_8UC1, Scalar::all(0));
Mat image = *depthImage;
I'll guess here, but I haven't seen many instances of IplImage that aren't pointers. Are you sure that image, wherever it's coming from, isn't also a pointer to an IplImage struct?
IplImage *blobimg = image;
I use this portion of code in my project and it works; see if it can help:
//BYTE* blobMap = ... blobMap holds an image
CvMat mat = cvMat( HEIGHT, WIDTH, CV_8UC1, blobMap);
IplImage *img = cvCreateImage(cvSize(HEIGHT,WIDTH), IPL_DEPTH_8U, 1);
cvGetImage(&mat, img);
cvThreshold(img, img, 10, 255, CV_THRESH_BINARY);
IplImage *labelImg = cvCreateImage(cvGetSize(img),IPL_DEPTH_LABEL,1);
CvBlobs blobs;
unsigned int result = cvLabel(img, labelImg, blobs);
cvFilterByArea(blobs, 1000, 1680*HEIGHT);
IplImage *imgOut = cvCreateImage(cvGetSize(img), IPL_DEPTH_8U, 3);
cvRenderBlobs(labelImg, blobs, img, imgOut);
cvNamedWindow("test", 1);
cvShowImage("test", imgOut);
cvWaitKey(0);
cvDestroyWindow("test");
I also don't like the way you pass the Mat to an IplImage; are you sure that your input image (blobimg) is OK?
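For what it's worth, here is a rough fragment showing how the Mat from the question could be handed to cvLabel with a separate 3-channel render target. Assumptions: cvblob.h is included with its namespace in scope, and the OpenCV 2.x Mat-to-IplImage conversion used above is available (in 3.x/4.x use cvIplImage(depth) instead):
cv::Mat depth(480, 640, CV_8UC1, cv::Scalar::all(0));
// ... fill 'depth' with the depth data ...
IplImage blobimg = depth;   // header over depth's data (OpenCV 2.x); no copy is made
IplImage* labelImg = cvCreateImage(cvGetSize(&blobimg), IPL_DEPTH_LABEL, 1);
IplImage* render   = cvCreateImage(cvGetSize(&blobimg), IPL_DEPTH_8U, 3); // 3-channel target
cvCvtColor(&blobimg, render, CV_GRAY2BGR);   // draw the boxes on a colour copy of the input
CvBlobs blobs;
cvLabel(&blobimg, labelImg, blobs);
cvRenderBlobs(labelImg, blobs, &blobimg, render, CV_BLOB_RENDER_BOUNDING_BOX);
cvShowImage("Depth", render);
cvWaitKey(0);
cvReleaseImage(&labelImg);
cvReleaseImage(&render);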