I am trying to create a cv::Mat from a uchar* buffer, but I could not find a useful conversion. My code is below:
uchar* urgbImg; // this buffer is filled by another function
Mat img_argb(HEIGHT, WIDTH, CV_8UC4, urgbImg);
Mat img_rgb(HEIGHT, WIDTH, CV_8UC3);
img_argb.convertTo(img_rgb, CV_8UC3);
cv::imwrite("RGB.png", img_rgb);
QImage img1(urgbImg, WIDTH, HEIGHT, QImage::Format_ARGB32);
QImage img2 = img1.convertToFormat(QImage::Format_RGB32);
QFile file2(QString::fromStdString("QRGB.png"));
file2.open(QIODevice::WriteOnly);
img2.save(&file2,"PNG",100);
file2.close();
QRGB.png comes out fine, but RGB.png does not. I have a uchar array, so the data is 8-bit.
I also tried CV_8UC3 (without converting from CV_8UC4), CV_32SC3 and CV_32SC4. All results are bad. How can I create an RGB image from the uchar*?
convertTo() does not turn a CV_8UC4 image into a CV_8UC3 one; it only changes the depth, not the number of channels.
You should use the cvtColor() function instead:
Mat img_argb(HEIGHT, WIDTH, CV_8UC4, urgbImg);
Mat img_rgb;
cvtColor(img_argb,img_rgb,COLOR_BGRA2BGR);
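For reference, here is a minimal end-to-end sketch, assuming urgbImg really holds pixels in QImage::Format_ARGB32 layout (stored as B,G,R,A bytes in memory on little-endian machines, so COLOR_BGRA2BGR matches the BGR order that imwrite expects); WIDTH and HEIGHT are the constants from the question:
#include <opencv2/opencv.hpp>

// Sketch: wrap the existing buffer (no copy), drop the alpha channel, save as PNG.
// Assumes urgbImg points to HEIGHT*WIDTH*4 bytes laid out as ARGB32 (B,G,R,A in memory).
void saveRgbPng(uchar* urgbImg, int WIDTH, int HEIGHT)
{
    cv::Mat img_argb(HEIGHT, WIDTH, CV_8UC4, urgbImg);    // wraps urgbImg, does not copy
    cv::Mat img_rgb;
    cv::cvtColor(img_argb, img_rgb, cv::COLOR_BGRA2BGR);  // 4 channels -> 3 channels
    cv::imwrite("RGB.png", img_rgb);                      // imwrite expects BGR channel order
}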
Related
Like the title says, I am trying to convert a cv::Mat to a QImage. I run the equalizeHist() function on the Mat and then convert it to a QImage to display in a widget window in Qt. I know the Mat loads the image correctly because the equalized image shows in a new window with imshow(); however, after converting the Mat to a QImage, I cannot get it to display in the window. I believe the problem is the Mat-to-QImage conversion, but I can't find the issue. Below is part of my code:
Mat image2= imread(directoryImage1.toStdString(),0);
//cv::cvtColor(image2,image2,COLOR_BGR2GRAY);
Mat histEquImg;
equalizeHist(image2,histEquImg);
imshow("Histogram Equalized Image 2", histEquImg);
//QImage img=QImage((uchar*) histEquImg.data, histEquImg.cols, histEquImg.rows, histEquImg.step, QImage::Format_ARGB32);
imageObject= new QImage((uchar*) histEquImg.data, histEquImg.cols, histEquImg.rows, histEquImg.step, QImage::Format_RGB888);
image = QPixmap::fromImage(*imageObject);
scene=new QGraphicsScene(this); //create a frame for image 2
scene->addPixmap(image); //put image 1 inside of the frame
ui->graphicsView_4->setScene(scene); //put the frame, which contains image 3, to the GUI
ui->graphicsView_4->fitInView(scene->sceneRect(),Qt::KeepAspectRatio); //keep the dimension ratio of image 3
No errors occur and the program doesn't crash.
Thanks in advance.
Your problem is the cv::Mat-to-QImage conversion: using flag 0 in cv::imread means the image is read as grayscale, yet you construct the QImage with the format QImage::Format_RGB888. I use the following function to convert a cv::Mat to a QImage:
static QImage MatToQImage(const cv::Mat& mat)
{
    // 8-bit unsigned, 1 channel
    if (mat.type() == CV_8UC1)
    {
        // Set up a grayscale color table (used to translate colour indexes to qRgb values)
        QVector<QRgb> colorTable;
        for (int i = 0; i < 256; i++)
            colorTable.push_back(qRgb(i, i, i));
        // Wrap the Mat data (no copy) in a QImage with the same dimensions
        const uchar *qImageBuffer = (const uchar*)mat.data;
        QImage img(qImageBuffer, mat.cols, mat.rows, mat.step, QImage::Format_Indexed8);
        img.setColorTable(colorTable);
        return img;
    }
    // 8-bit unsigned, 3 channels
    if (mat.type() == CV_8UC3)
    {
        // Wrap the Mat data (no copy), then swap BGR to RGB (rgbSwapped() returns a copy)
        const uchar *qImageBuffer = (const uchar*)mat.data;
        QImage img(qImageBuffer, mat.cols, mat.rows, mat.step, QImage::Format_RGB888);
        return img.rgbSwapped();
    }
    return QImage();
}
Beyond that, your comments (e.g. "put the frame, which contains image 3, to the GUI") show a misconception of how QGraphicsView and QGraphicsScene work: with ui->graphicsView_4->setScene(scene) you are not setting a frame but a scene, and the scene should only be set once, preferably in the constructor:
// constructor
scene = new QGraphicsScene(this);
ui->graphicsView->setScene(scene);
So when you want to load the image just use the scene:
cv::Mat image= cv::imread(filename.toStdString(), CV_LOAD_IMAGE_GRAYSCALE);
cv::Mat histEquImg;
equalizeHist(image, histEquImg);
QImage qimage = MatToQImage(histEquImg);
QPixmap pixmap = QPixmap::fromImage(qimage);
scene->addPixmap(pixmap);
ui->graphicsView->fitInView(scene->sceneRect(), Qt::KeepAspectRatio);
The complete example can be found in the following link.
I have a 32-bit integer array containing the pixel values of a 3450x3450 image that I want to create a cv::Mat with. I tried the following:
int *image_array;
image_array = (int *)malloc( 3450*3450*sizeof(int) );
memset( (char *)image_array, 0, sizeof(int)*3450*3450 );
image_array[0] = intensity_of_first_pixel;
...
image_array[11902499] = intensity_of_last_pixel;
Mat M(3450, 3450, CV_32FC1, image_array);
Upon displaying the image I get a black screen. I should also note that the array holds 16-bit grayscale values.
I guess you should try creating the Mat with a type that matches the input image, which I assume is a gray or RGB[A] image:
cv::Mat m(3450, 3450, CV_8UC1, image_array) // For GRAY image
cv::Mat m(3450, 3450, CV_8UC3, image_array) // For RGB image
cv::Mat m(3450, 3450, CV_8UC4, image_array) // For RGBA image
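The question says the buffer is a 32-bit int array holding 16-bit grayscale values, so another possibility (an assumption going beyond the answer above, not something the original poster confirmed) is to match the Mat type to the element type of the buffer and then scale down for display:
#include <opencv2/opencv.hpp>

// Sketch: wrap the int buffer as a 32-bit signed single-channel Mat (the type has to
// match the element size, otherwise the pixels are misread and the image looks black),
// then scale the 16-bit value range down to 8 bits for display.
cv::Mat intBufferToDisplayable(int* image_array, int rows, int cols)
{
    cv::Mat m32(rows, cols, CV_32SC1, image_array);  // no copy, just a view of the buffer
    cv::Mat m8;
    m32.convertTo(m8, CV_8UC1, 255.0 / 65535.0);     // map 0..65535 to 0..255
    return m8;
}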
I have to transform a QImage to a cv::Mat. If I use the technique described in similar topics, I get a different number of contours (7-8) and a strange result matrix, but if I do
QImage im;
im.save ("tmp.bmp");
cv::Mat rImage;
rImage = cv::imread ("tmp.bmp", CV_LOAD_IMAGE_GRAYSCALE);
then findContours works fine. What is the difference between these techniques, and how can I achieve equal results with both approaches?
Your code works for me.
cv::Mat qimage_to_cvmat_copy(const QImage &img, int format)
{
    // Wrap the QImage buffer in a Mat header, then clone so the Mat owns its data.
    uchar *b = const_cast<uchar*>(img.bits());
    int bytesPerLine = img.bytesPerLine();
    return cv::Mat(img.height(), img.width(), format, b, bytesPerLine).clone();
}

int main(int argc, char *argv[])
{
    QImage img(QString("lena.bmp"));
    QImage img2 = img.convertToFormat(QImage::Format_RGB32);
    cv::Mat imageMat = qimage_to_cvmat_copy(img2, CV_8UC4);
    cv::namedWindow("lena");
    cv::imshow("lena", imageMat);
    cv::waitKey(0);
}
Make sure your Mat format is CV_8UC4 if your QImage format is Format_RGB32. You don't have to do a cvtColor or mixChannels.
Thanks, all! As mentioned above, I used the QImage-to-cv::Mat conversion described here. My source code became something like this:
QImage srcIm(argv[1]);
QImage img2 = srcIm.convertToFormat(QImage::Format_ARGB32);
Mat src_gray = QImageToCvMat(img2);
Mat src_gray1;
cvtColor(src_gray, src_gray1, CV_RGB2GRAY);
Mat bwimg = src_gray1.clone(); // > 127;
vector<vector<Point> > contours;
findContours(bwimg, contours, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);
Everything works fine.
I have searched a lot on the internet, but I have only found how to convert a QImage to RGB format; I want to convert a QImage to a cv::Mat of format CV_64FC3.
I get really bad results when I work with CV_8UC3.
Here is my code:
QImage myImage;
myImage.load("C://images//PolarImage300915163358.bmp");
QLabel myLabel;
myLabel.setPixmap(QPixmap::fromImage(myImage));
//myLabel.show();
cv::Mat image1 = QImage2Mat(myImage);
Mat img;
image1.convertTo(img, CV_64FC3, 1.0 / 255.0);
And here is the function that I used:
cv::Mat QImage2Mat(QImage const& src)
{
cv::Mat tmp(src.height(),src.width(),CV_8UC3,(uchar*)src.bits(),src.bytesPerLine());
cv::Mat result; // deep copy just in case (my lack of knowledge with open cv)
cvtColor(tmp, result,CV_BGR2RGB);
return result;
}
Please help me; I'm new to both OpenCV and Qt.
Not sure what you mean by bad results, but you are assuming that QImage loads the image the same way OpenCV does (BGR). The documentation tells you that it uses ARGB.
So, knowing this, you have two options:
1. Convert the QImage to QImage::Format_RGB888 with convertToFormat(); then the line cvtColor(tmp, result, CV_BGR2RGB); is no longer needed, since the data will already be RGB (a sketch of this option follows below).
2. Use CV_8UC4 when creating the cv::Mat and then drop the alpha channel using either split/merge or mixChannels.
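For instance, a minimal sketch of the first option, assuming the goal is the CV_64FC3 Mat scaled by 1/255 as in the question (the helper name is illustrative):
#include <opencv2/opencv.hpp>
#include <QImage>

// Sketch of option 1: convert the QImage to RGB888, wrap it as CV_8UC3,
// then convert to CV_64FC3 with the 1/255 scaling from the question.
cv::Mat QImage2Mat64F(const QImage& src)
{
    QImage rgb = src.convertToFormat(QImage::Format_RGB888);
    // Wrap the QImage buffer (no copy); convertTo below allocates a fresh buffer.
    cv::Mat tmp(rgb.height(), rgb.width(), CV_8UC3,
                const_cast<uchar*>(rgb.bits()), rgb.bytesPerLine());
    cv::Mat result;
    tmp.convertTo(result, CV_64FC3, 1.0 / 255.0);
    // Note: the channels are in RGB order here; apply cv::cvtColor with
    // cv::COLOR_RGB2BGR first if later OpenCV code expects BGR.
    return result;
}
The second option is exactly what the follow-up below does with mixChannels.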
I have found what was going wrong: QImage has a fourth channel for alpha, so when you read the QImage data you need to put it into a CV_8UC4 Mat.
Here is the code:
Mat QImage2Mat(const QImage& src)
{
    // Wrap the ARGB32 data as a 4-channel Mat (B,G,R,A in memory), then copy
    // only the first three channels (B,G,R) into a 3-channel Mat, dropping alpha.
    cv::Mat mat(src.height(), src.width(), CV_8UC4, (uchar*)src.bits(), src.bytesPerLine());
    cv::Mat result(mat.rows, mat.cols, CV_8UC3);
    int from_to[] = { 0,0, 1,1, 2,2 };
    cv::mixChannels(&mat, 1, &result, 1, from_to, 3);
    return result;
}
When doing:
IplImage blobimg = image;
IplImage *labelImg=cvCreateImage(cvGetSize(&blobimg), IPL_DEPTH_LABEL, 1);
IplImage *test=cvCreateImage(cvGetSize(&blobimg), IPL_DEPTH_8U, 3);
unsigned int result=cvLabel(&blobimg, labelImg, blobs);
cvRenderBlobs(labelImg, blobs, &blobimg,test,CV_BLOB_RENDER_BOUNDING_BOX);
Mat imgMat(test);
imshow("Depth", imgMat);
I notice that my test image is empty.
I think I have to do this instead:
cvRenderBlobs(labelImg, blobs, &blobimg,&blobimg,CV_BLOB_RENDER_BOUNDING_BOX);
But cvRenderBlobs' destImg has to have 3 channels and IPL_DEPTH_8U, while my image has only 1 channel since it is a grayscale image.
Can someone tell me why this is and how I can fix it?
Edit
Where image comes from:
Mat *depthImage = new Mat(480, 640, CV_8UC1, Scalar::all(0));
Mat image = *depthImage;
I'll guess here, but I have rarely seen instances of IplImage that are not actually pointers. Are you sure that image, wherever it's coming from, isn't also a pointer to an IplImage struct?
IplImage *blobimg = image;
I use this portion of code in my project and it works; see if it helps:
//BYTE* blobMap = ... blobMap holds an image
CvMat mat = cvMat( HEIGHT, WIDTH, CV_8UC1, blobMap);
IplImage *img = cvCreateImage(cvSize(WIDTH, HEIGHT), IPL_DEPTH_8U, 1); // cvSize is (width, height)
cvGetImage(&mat, img);
cvThreshold(img, img, 10, 255, CV_THRESH_BINARY);
IplImage *labelImg = cvCreateImage(cvGetSize(img),IPL_DEPTH_LABEL,1);
CvBlobs blobs;
unsigned int result = cvLabel(img, labelImg, blobs);
cvFilterByArea(blobs, 1000, 1680*HEIGHT);
IplImage *imgOut = cvCreateImage(cvGetSize(img), IPL_DEPTH_8U, 3);
cvRenderBlobs(labelImg, blobs, img, imgOut);
cvNamedWindow("test", 1);
cvShowImage("test", imgOut);
cvWaitKey(0);
cvDestroyWindow("test");
I also don't like the way you pass the Mat to an IplImage; are you sure that your input image (blobimg) is OK?
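For completeness, a minimal sketch of one way to satisfy cvRenderBlobs' 3-channel requirement from the original question (this is an assumption built on the question's edit, where image is a CV_8UC1 Mat, and it reuses the same cvBlob C API as the code above):
// Sketch: label on the single-channel image, render into a separate 3-channel canvas.
// Assumes cvblob.h is included and `image` is the CV_8UC1 cv::Mat from the edit.
IplImage blobimg = image;                                             // header over the Mat data
IplImage *labelImg = cvCreateImage(cvGetSize(&blobimg), IPL_DEPTH_LABEL, 1);
IplImage *render   = cvCreateImage(cvGetSize(&blobimg), IPL_DEPTH_8U, 3);
cvCvtColor(&blobimg, render, CV_GRAY2BGR);                            // gray content onto the 3-channel canvas
CvBlobs blobs;
cvLabel(&blobimg, labelImg, blobs);
cvRenderBlobs(labelImg, blobs, &blobimg, render, CV_BLOB_RENDER_BOUNDING_BOX);
Mat imgMat(render);                                                   // wrap for imshow, as in the question
imshow("Depth", imgMat);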