I have to transform a QImage to a cv::Mat. If I use the technique described in similar topics, I get a different number of contours (7--8) and a strange result matrix, but if I do
QImage im;
im.save ("tmp.bmp");
cv::Mat rImage;
rImage = cv::imread ("tmp.bmp", CV_LOAD_IMAGE_GRAYSCALE);
then findContours works fine and produces the proper result. What is the difference between these techniques, and how can I achieve equal results between the two approaches?
Your code works for me.
#include <opencv2/opencv.hpp>
#include <QImage>
#include <QString>

cv::Mat qimage_to_cvmat_copy(const QImage &img, int format);

int main(int argc, char *argv[])
{
    QImage img(QString("lena.bmp"));
    QImage img2 = img.convertToFormat(QImage::Format_RGB32);
    cv::Mat imageMat = qimage_to_cvmat_copy(img2, CV_8UC4);
    cv::namedWindow("lena");
    cv::imshow("lena", imageMat);
    cv::waitKey(0);
}

cv::Mat qimage_to_cvmat_copy(const QImage &img, int format)
{
    // Wrap the QImage buffer in a Mat header, then clone so the copy
    // owns its pixels and stays valid after the QImage is destroyed.
    uchar *b = const_cast<uchar *>(img.bits());
    int bytesPerLine = img.bytesPerLine();
    return cv::Mat(img.height(), img.width(), format, b, bytesPerLine).clone();
}
Make sure your Mat format is CV_8UC4 if your QImage format is Format_RGB32. You don't have to do a cvtColor or mixChannels.
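If you need a single-channel image for findContours afterwards, here is a minimal sketch building on the code above (assuming Format_RGB32 data, which is BGRA in memory on little-endian machines):

cv::Mat gray;
cv::cvtColor(imageMat, gray, CV_BGRA2GRAY); // 4 channels -> 1 channel
cv::Mat bw = gray > 127;                    // binarize before findContours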
As mentioned above, I used the QImage to cv::Mat conversion described here. My source code became something like this:
QImage srcIm(argv[1]);
QImage img2 = srcIm.convertToFormat(QImage::Format_ARGB32);
Mat src_gray = QImageToCvMat(img2);          // 4-channel (BGRA) Mat
Mat src_gray1;
cvtColor(src_gray, src_gray1, CV_BGRA2GRAY); // 4-channel source, so BGRA2GRAY
Mat bwimg = src_gray1.clone();               // or: src_gray1 > 127 to binarize
vector<vector<Point> > contours;
findContours(bwimg, contours, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);
All works fine.
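If your Qt version has QImage::Format_Grayscale8 (Qt 5.5 or later), a shorter path is possible; this is only a sketch under that assumption, skipping the cvtColor step entirely:

QImage gray = srcIm.convertToFormat(QImage::Format_Grayscale8);
cv::Mat tmp(gray.height(), gray.width(), CV_8UC1,
            (uchar*)gray.bits(), gray.bytesPerLine());
cv::Mat bwimg = tmp.clone(); // deep copy before gray goes out of scope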
I'm using OpenCV in C++ to process a cv::Mat before publishing it to a ROS topic. For some reason cv::drawKeypoints messes up my result by virtually stretching it over the width, beyond the image frame: the blob in the right topic represents the one on the top left in the left topic.
Here's my code:
#include <ros/ros.h>
#include <image_transport/image_transport.h>
#include <cv_bridge/cv_bridge.h>
#include <opencv2/opencv.hpp>

image_transport::Publisher pubthresh;
image_transport::Publisher pubkps;
cv::SimpleBlobDetector detector;

void imageCallback(const sensor_msgs::ImageConstPtr& msg)
{
    cv::Mat mat = cv_bridge::toCvShare(msg, "bgr8")->image;
    cv::cvtColor(mat, mat, CV_BGR2GRAY);
    cv::threshold(mat, mat, 35, 255, 0);

    std::vector<cv::KeyPoint> keypoints;
    detector.detect(mat, keypoints);

    cv::Mat kps;
    cv::drawKeypoints(mat, keypoints, kps, cv::Scalar(0,0,255),
                      cv::DrawMatchesFlags::DRAW_RICH_KEYPOINTS);

    sensor_msgs::ImageConstPtr ithresh, ikps;
    ithresh = cv_bridge::CvImage(std_msgs::Header(), "mono8", mat).toImageMsg();
    ikps = cv_bridge::CvImage(std_msgs::Header(), "mono8", kps).toImageMsg();
    pubthresh.publish(ithresh);
    pubkps.publish(ikps);
}

int main(int argc, char **argv)
{
    ...
    image_transport::Subscriber sub = it.subscribe("/saliency_map", 1, imageCallback);
    ...
}
After the cv::drawKeypoints operation, both cv::Mats are treated the same. According to the documentation the image shouldn't get resized either. What am I missing here?
Looks like your result image isn't grayscale but a color image.
"Stretching" here means that each pixel implicitly becomes 3x as wide, because its 3 channels are interpreted as consecutive grayscale values.
So try converting kps to grayscale before your publishing code:
cv::cvtColor(kps,kps, CV_BGR2GRAY );
Or adjust the line
ikps = cv_bridge::CvImage(std_msgs::Header(), "mono8", kps).toImageMsg();
to publish a BGR color image instead of "mono8" (the same "bgr8" encoding your toCvShare call already uses).
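A minimal sketch of both fixes, assuming kps comes out of cv::drawKeypoints as a 3-channel BGR image:

// Option 1: make kps single-channel so "mono8" is accurate
cv::cvtColor(kps, kps, CV_BGR2GRAY);
ikps = cv_bridge::CvImage(std_msgs::Header(), "mono8", kps).toImageMsg();

// Option 2: keep kps in color and declare the encoding to match
ikps = cv_bridge::CvImage(std_msgs::Header(), "bgr8", kps).toImageMsg();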
I am trying to create a Mat object from a uchar*. I could not find a suitable conversion. My code is below:
uchar* urgbImg; // this buffer is filled by another function

Mat img_argb(HEIGHT, WIDTH, CV_8UC4, urgbImg);
Mat img_rgb(HEIGHT, WIDTH, CV_8UC3);
img_argb.convertTo(img_rgb, CV_8UC3);
cv::imwrite("RGB.png", img_rgb);

QImage img1(urgbImg, WIDTH, HEIGHT, QImage::Format_ARGB32);
QImage img2 = img1.convertToFormat(QImage::Format_RGB32);
QFile file2(QString::fromStdString("QRGB.png"));
file2.open(QIODevice::WriteOnly);
img2.save(&file2, "PNG", 100);
file2.close();
The QRGB.png file comes out fine, but the RGB.png file does not. I have a uchar array, so the data is 8-bit.
I tried CV_8UC3 (without conversion from CV_8UC4), CV_32SC3 and CV_32SC4. All the results are bad. How can I create an RGB image from the uchar*?
convertTo() doesn't convert a CV_8UC4 image to CV_8UC3; it only changes the element depth, never the number of channels.
You should use the cvtColor() function instead:
Mat img_argb(HEIGHT, WIDTH, CV_8UC4, urgbImg);
Mat img_rgb;
cvtColor(img_argb,img_rgb,COLOR_BGRA2BGR);
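For completeness, the whole round trip under the same assumption as the question (urgbImg holds HEIGHT x WIDTH BGRA pixels, matching QImage::Format_ARGB32 on little-endian machines):

Mat img_argb(HEIGHT, WIDTH, CV_8UC4, urgbImg);
Mat img_rgb;
cvtColor(img_argb, img_rgb, COLOR_BGRA2BGR); // 4 -> 3 channels, depth unchanged
imwrite("RGB.png", img_rgb);                 // should now match QRGB.png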
I have searched a lot on the internet, but I have only found how to convert a QImage to RGB format; I want to convert a QImage to the cv::Mat format CV_64FC3.
I get really bad results when I work with CV_8UC3.
Here is my code:
QImage myImage;
myImage.load("C://images//PolarImage300915163358.bmp");
QLabel myLabel;
myLabel.setPixmap(QPixmap::fromImage(myImage));
//myLabel.show();
cv::Mat image1 = QImage2Mat(myImage);
Mat img;
image1.convertTo(img, CV_64FC3, 1.0 / 255.0);
and here is the function that I used:
cv::Mat QImage2Mat(QImage const& src)
{
cv::Mat tmp(src.height(),src.width(),CV_8UC3,(uchar*)src.bits(),src.bytesPerLine());
cv::Mat result; // deep copy just in case (my lack of knowledge with open cv)
cvtColor(tmp, result,CV_BGR2RGB);
return result;
}
Please help me, I'm new to both OpenCV and Qt.
Not sure what you mean by bad results, but you are assuming that QImage lays the image out the same way OpenCV does (BGR). The documentation tells you that it uses ARGB.
So, knowing this, you have 2 options (a sketch of the first one follows the list):
Convert the QImage to QImage::Format_RGB888 using convertToFormat; then the line cvtColor(tmp, result, CV_BGR2RGB); is not needed, since the data will already be in RGB.
Use CV_8UC4 when creating the cv::Mat and then drop the alpha channel using either split and merge, or mixChannels.
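A minimal sketch of option 1, keeping the QImage2Mat name from the question (QImage::Format_RGB888 stores tightly packed 24-bit RGB, so CV_8UC3 matches it):

cv::Mat QImage2Mat(QImage const& src)
{
    QImage rgb = src.convertToFormat(QImage::Format_RGB888);
    cv::Mat tmp(rgb.height(), rgb.width(), CV_8UC3,
                (uchar*)rgb.bits(), rgb.bytesPerLine());
    return tmp.clone(); // deep copy: tmp points into rgb, which dies here
}

Note the result is in RGB channel order; if you later display it with cv::imshow, which expects BGR, the red and blue channels will appear swapped.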
I have found what was going wrong: in fact, QImage has a fourth channel for alpha, so when you read the QImage data you need to put it into a CV_8UC4 Mat.
Here is the code:
Mat QImage2Mat(const QImage& src)
{
    cv::Mat mat = cv::Mat(src.height(), src.width(), CV_8UC4,
                          (uchar*)src.bits(), src.bytesPerLine());
    cv::Mat result = cv::Mat(mat.rows, mat.cols, CV_8UC3);
    int from_to[] = { 0,0, 1,1, 2,2 };              // copy channels 0..2, drop alpha
    cv::mixChannels(&mat, 1, &result, 1, from_to, 3);
    return result;
}
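From there, getting to the CV_64FC3 the question asked for is the convertTo call from the original snippet (the 1.0/255.0 scale maps the 8-bit range into [0, 1]):

cv::Mat mat8 = QImage2Mat(myImage);           // CV_8UC3
cv::Mat mat64;
mat8.convertTo(mat64, CV_64FC3, 1.0 / 255.0); // same 3 channels, double precision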
void doCorrectIntensityVariation(Mat& image)
{
Mat kernel = getStructuringElement(MORPH_ELLIPSE, Size(19,19));
Mat closed;
morphologyEx(image, closed, MORPH_CLOSE, kernel);
image.convertTo(image, CV_32F); // divide requires floating-point
divide(image, closed, image, 1, CV_32F);
normalize(image, image, 0, 255, NORM_MINMAX);
image.convertTo(image, CV_8UC1); // convert back to unsigned int
}
inline void correctIntensityVariation(IplImage *img)
{
//Mat imgMat(img); copy the img
Mat imgMat;
imgMat = img; //no copy is done, imgMat is a header of img
doCorrectIntensityVariation(imgMat);
imshow("gamma corrected",imgMat); cvWaitKey(0);
}
When I call
cvShowImage ("normal", n_im); cvWaitKey (0);
correctIntensityVariation(n_im);//here n_im is IplImage*
cvShowImage ("After processed", n_im); cvWaitKey (0);
// here I require n_im for further processing
I wanted "After processed" to be same as that of "gamma corrected" but what I found "After processed" was not the same as that of "gamma corrected" but same as that of "normal" . Why?? What is going wrong??
A very simple wrapper should do the job.
Cheatsheet of OpenCV
I rarely use the old API, because Mat is much easier to deal with, and it has no performance penalty compared with the old C API. As the OpenCV tutorial page says: The main downside of the C++ interface is that many embedded development systems at the moment support only C. Therefore, unless you are targeting embedded platforms, there's no point to using the old methods (unless you're a masochist programmer and you're asking for trouble).
openCV tutorial
cv::Mat to Ipl
Ipl to cv::Mat and Mat to Ipl
IplImage* pImg = cvLoadImage("lena.jpg");
cv::Mat img(pImg, 0);  // transform Ipl to Mat; 0 means do not copy
IplImage qImg;         // not a pointer: operators cannot be overloaded on raw pointers
qImg = IplImage(img);  // transform Mat to Ipl
Edit: I made a mistake earlier. If the Mat is reallocated inside the function, you need to copy, or try to steal, the resource from the Mat (I don't know how to do the latter yet).
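To see why the first version lost the result, here is a minimal sketch (assuming the OpenCV 2.x API used throughout this thread): convertTo with a new depth must allocate a fresh buffer, so the Mat header silently stops pointing at the IplImage's pixels.

IplImage *ipl = cvCreateImage(cvSize(4, 4), IPL_DEPTH_8U, 1);
cv::Mat header(ipl);              // shares ipl's pixel buffer, no copy
// here header.data == (uchar*)ipl->imageData
header.convertTo(header, CV_32F); // new depth -> a new buffer is allocated
// now header.data != (uchar*)ipl->imageData; ipl never sees the result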
Copy the data
void doCorrectIntensityVariation(cv::Mat& image)
{
cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(19,19));
cv::Mat closed;
cv::morphologyEx(image, closed, cv::MORPH_CLOSE, kernel);
image.convertTo(image, CV_32F); // divide requires floating-point
cv::divide(image, closed, image, 1, CV_32F);
cv::normalize(image, image, 0, 255, cv::NORM_MINMAX);
image.convertTo(image, CV_8UC1); // convert back to unsigned int
}
//there is no need to change the name of the function; the compiler
//treats these as different functions in C++ (overloading)
void doCorrectIntensityVariation(IplImage **img)
{
cv::Mat imgMat;
imgMat = *img; //no copy is done, imgMat is a header of img
doCorrectIntensityVariation(imgMat);
IplImage* old = *img;
IplImage src = imgMat;
*img = cvCloneImage(&src);
cvReleaseImage(&old);
}
int main()
{
std::string const name = "onebit_31.png";
cv::Mat mat = cv::imread(name);
if(mat.data){
doCorrectIntensityVariation(mat);
cv::imshow("gamma corrected mat",mat);
cv::waitKey();
}
IplImage* templat = cvLoadImage(name.c_str(), 1);
if(templat){
doCorrectIntensityVariation(&templat);
cvShowImage("mainWin", templat);
// wait for a key
cvWaitKey(0);
cvReleaseImage(&templat);
}
return 0;
}
You could write a small function to alleviate the chores:
void copy_mat_to_Ipl(cv::Mat const &src, IplImage **dst)
{
IplImage* old = *dst;
IplImage temp_src = src;
*dst = cvCloneImage(&temp_src);
cvReleaseImage(&old);
}
and call it in the function
void doCorrectIntensityVariation(IplImage **img)
{
cv::Mat imgMat;
imgMat = *img; //no copy is done, imgMat is a header of img
doCorrectIntensityVariation(imgMat);
copy_mat_to_Ipl(imgMat, img);
}
I will post how to "steal" the resource from the Mat rather than copy it once I figure out a solid solution. Does anyone know how to do it?
When doing:
IplImage blobimg = image;
IplImage *labelImg=cvCreateImage(cvGetSize(&blobimg), IPL_DEPTH_LABEL, 1);
IplImage *test=cvCreateImage(cvGetSize(&blobimg), IPL_DEPTH_8U, 3);
unsigned int result=cvLabel(&blobimg, labelImg, blobs);
cvRenderBlobs(labelImg, blobs, &blobimg,test,CV_BLOB_RENDER_BOUNDING_BOX);
Mat imgMat(test);
imshow("Depth", imgMat);
I notice that my test variable is empty.
I think I have to do this instead:
cvRenderBlobs(labelImg, blobs, &blobimg,&blobimg,CV_BLOB_RENDER_BOUNDING_BOX);
But cvRenderBlobs' destination image has to have 3 channels and IPL_DEPTH_8U, and my image has only 1 channel, since it's a grayscale image.
Can someone tell me why this is and how I can fix this ?
Edit
Where image comes from:
Mat *depthImage = new Mat(480, 640, CV_8UC1, Scalar::all(0));
Mat image = *depthImage;
I'll guess here, but I have rarely seen instances of IplImage that are not actually pointers. Are you sure that image, wherever it's coming from, isn't also a pointer to an IplImage struct?
IplImage *blobimg = image;
I use this portion of code in my project and it works; see if it helps:
//BYTE* blobMap = ... blobMap holds an image
CvMat mat = cvMat(HEIGHT, WIDTH, CV_8UC1, blobMap);
IplImage *img = cvCreateImage(cvSize(WIDTH, HEIGHT), IPL_DEPTH_8U, 1); // cvSize is (width, height)
cvGetImage(&mat, img); // rewrites img's header to point at mat's data
cvThreshold(img, img, 10, 255, CV_THRESH_BINARY);
IplImage *labelImg = cvCreateImage(cvGetSize(img),IPL_DEPTH_LABEL,1);
CvBlobs blobs;
unsigned int result = cvLabel(img, labelImg, blobs);
cvFilterByArea(blobs, 1000, 1680*HEIGHT);
IplImage *imgOut = cvCreateImage(cvGetSize(img), IPL_DEPTH_8U, 3);
cvRenderBlobs(labelImg, blobs, img, imgOut);
cvNamedWindow("test", 1);
cvShowImage("test", imgOut);
cvWaitKey(0);
cvDestroyWindow("test");
I also don't like the way you pass the Mat to an IplImage; are you sure your input image (blobimg) is OK?
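If you want to verify that hand-off before blaming cvBlob, a tiny check is enough (a sketch; it relies only on the OpenCV 2.x Mat-to-IplImage conversion operator used in the question):

cv::Mat image(480, 640, CV_8UC1, cv::Scalar::all(0));
IplImage blobimg = image; // header only, no pixel copy
// both lines must print the same address, or the conversion went wrong
printf("%p\n", (void*)image.data);
printf("%p\n", (void*)blobimg.imageData);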