When doing:
IplImage blobimg = image;
IplImage *labelImg=cvCreateImage(cvGetSize(&blobimg), IPL_DEPTH_LABEL, 1);
IplImage *test=cvCreateImage(cvGetSize(&blobimg), IPL_DEPTH_8U, 3);
unsigned int result=cvLabel(&blobimg, labelImg, blobs);
cvRenderBlobs(labelImg, blobs, &blobimg,test,CV_BLOB_RENDER_BOUNDING_BOX);
Mat imgMat(test);
imshow("Depth", imgMat);
I notice that my test variable is empty.
I think I have to do this instead:
cvRenderBlobs(labelImg, blobs, &blobimg,&blobimg,CV_BLOB_RENDER_BOUNDING_BOX);
But cvRenderBlobs' destImg has to have 3 channels and IPL_DEPTH_8U, and my image has only 1 channel since it's a grayscale image.
Can someone tell me why this is and how I can fix it?
Edit
Where image comes from:
Mat *depthImage = new Mat(480, 640, CV_8UC1, Scalar::all(0));
Mat image = *depthImage;
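For reference, a minimal sketch of one possible fix, assuming the same cvBlob API as above: render into a separate 3-channel copy of the gray image made with cvCvtColor, so destImg satisfies the 3-channel IPL_DEPTH_8U requirement without touching the gray source.
IplImage blobimg = image;
IplImage *labelImg = cvCreateImage(cvGetSize(&blobimg), IPL_DEPTH_LABEL, 1);
// promote the gray image to a 3-channel canvas for cvRenderBlobs to draw on
IplImage *canvas = cvCreateImage(cvGetSize(&blobimg), IPL_DEPTH_8U, 3);
cvCvtColor(&blobimg, canvas, CV_GRAY2BGR);
unsigned int result = cvLabel(&blobimg, labelImg, blobs);
cvRenderBlobs(labelImg, blobs, &blobimg, canvas, CV_BLOB_RENDER_BOUNDING_BOX);
Mat imgMat(canvas);  // same display path as before
imshow("Depth", imgMat);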
I'll guess here, but I haven't seen many instances of IplImage that are not actually pointers. Are you sure that image, wherever it's coming from, isn't also a pointer to an IplImage struct?
IplImage *blobimg = image;
I use this portion of code in my project and it works; see if it helps:
//BYTE* blobMap = ... blobMap holds an image
CvMat mat = cvMat( HEIGHT, WIDTH, CV_8UC1, blobMap);
IplImage *img = cvCreateImage(cvSize(WIDTH, HEIGHT), IPL_DEPTH_8U, 1);  // cvSize takes (width, height)
cvGetImage(&mat, img);  // fill the IplImage header from the CvMat
cvThreshold(img, img, 10, 255, CV_THRESH_BINARY);
IplImage *labelImg = cvCreateImage(cvGetSize(img),IPL_DEPTH_LABEL,1);
CvBlobs blobs;
unsigned int result = cvLabel(img, labelImg, blobs);
cvFilterByArea(blobs, 1000, 1680*HEIGHT);
IplImage *imgOut = cvCreateImage(cvGetSize(img), IPL_DEPTH_8U, 3);
cvRenderBlobs(labelImg, blobs, img, imgOut);
cvNamedWindow("test", 1);
cvShowImage("test", imgOut);
cvWaitKey(0);
cvDestroyWindow("test");
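One caveat: everything allocated with cvCreateImage in this snippet is never released, so a long-running project should pair each allocation with cvReleaseImage (e.g. cvReleaseImage(&labelImg);) and, if I remember the cvBlob API correctly, release the blob list with cvReleaseBlobs(blobs).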
I also don't like the way you pass the Mat to an IplImage. Are you sure that your input image (blobimg) is OK?
Related
I am trying to convert a cv::Mat to IplImage on a PC with these characteristics:
opencv: 3.4.14
OS: Win 10
code: c++
An example of the different options:
cv::Mat MBin = cv::Mat::zeros(cv::Size(64, 64), CV_32FC1);
IplImage* image0= new IplImage(MBin);
IplImage image1 = MBin;
IplImage* image2 = cvCloneImage(&(IplImage)MBin);
IplImage* image3;
image3 = cvCreateImage(cvSize(MBin.cols, MBin.rows), 8, 3);
IplImage image4 = MBin;
cvCopy(&image4, image3);
Every line where imageX appears produces the error from the title.
This is the only solution that doesn't generate a compiler error:
#include <opencv2/core/types_c.h>
Mat Img = imread("1.jpg");
IplImage IBin_2 = cvIplImage(MBin);
IplImage* IBin = &IBin_2;
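Note that cvIplImage(MBin) only wraps the existing pixels in an IplImage header; no data is copied, so MBin must stay alive (and must not be reallocated) for as long as IBin is used.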
Before OpenCV 3.x, Mat had a constructor Mat(const IplImage* img, bool copyData=false);. In OpenCV 3.x, this constructor was removed.
So, you could refer to the following example to convert Mat to IplImage.
//Mat -> IplImage
//EXAMPLE:
//shallow copy:
Mat Img=imread("1.jpg");
IplImage header = IplImage(Img);  // take the address of a named header, not of a temporary
IplImage* pBinary = &header;
//For a deep copy, just add another copy of the data:
IplImage *input = cvCloneImage(pBinary);
Also, you could refer to this link for more information.
//opencv 4.5.2
IplImage* IplImage_img = cvCreateImage(cvSize(img.cols, img.rows), 8, 1);
cv::Mat MatImg(img.rows, img.cols, CV_8U, cv::Scalar(0));
MatImg = cv::cvarrToMat(IplImage_img);
img.copyTo(MatImg);
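A note on why this works: cv::cvarrToMat creates a Mat header over the IplImage's pixel buffer without copying, so the final copyTo writes img's pixels straight into IplImage_img, provided the sizes and types match (otherwise copyTo reallocates the destination Mat and the IplImage is left untouched).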
I have to transform a QImage to cv::Mat. If I use the technique described in similar topics, I receive a different number of contours (7-8) and a strange result matrix, but if I do
QImage im;
im.save ("tmp.bmp");
cv::Mat rImage;
rImage = cv::imread ("tmp.bmp", CV_LOAD_IMAGE_GRAYSCALE);
then the findContours function works fine and properly. What is the difference between these techniques, and how can I achieve equal results between the two approaches?
Your code works for me.
#include <QImage>
#include <opencv2/opencv.hpp>
cv::Mat qimage_to_cvmat_copy(const QImage &img, int format);  // declared before use so the snippet compiles
int main(int argc, char *argv[]){
QImage img(QString("lena.bmp"));
QImage img2 = img.convertToFormat(QImage::Format_RGB32);
cv::Mat imageMat = qimage_to_cvmat_copy(img2, CV_8UC4);
cv::namedWindow("lena");
cv::imshow("lena", imageMat);
cv::waitKey(0);
}
cv::Mat qimage_to_cvmat_copy(const QImage &img, int format)
{
uchar* b = const_cast<uchar*> (img.bits ());
int c = img.bytesPerLine();
return cv::Mat(img.height(), img.width(), format, b, c).clone();
}
Make sure your Mat format is CV_8UC4 if your QImage format is Format_RGB32. You don't have to do a cvtColor or mixChannels.
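It is worth spelling out why passing img.bytesPerLine() as the step matters: QImage pads each scanline to a 4-byte boundary, and building the Mat without that stride is a common cause of skewed images and bogus contour counts like the 7-8 mentioned above. If you want to avoid the clone(), a non-copying variant is also possible; a minimal sketch, assuming the QImage outlives the Mat (the name qimage_to_cvmat_view is mine):
cv::Mat qimage_to_cvmat_view(const QImage &img, int format)
{
// the Mat aliases the QImage buffer: no copy is made, so keep img alive
uchar* b = const_cast<uchar*>(img.bits());
return cv::Mat(img.height(), img.width(), format, b, img.bytesPerLine());
}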
Thanks, all!
As mentioned above, I used the QImage to cv::Mat conversion described here. My source code became something like this:
QImage srcIm (argv[1]);
QImage img2 = srcIm.convertToFormat(QImage::Format_ARGB32);
Mat src_gray = QImageToCvMat (img2);
Mat src_gray1;  // destination for the gray conversion
cvtColor (src_gray, src_gray1, CV_RGB2GRAY);
Mat bwimg = src_gray1.clone();// > 127;
vector<vector<Point> > contours;
findContours( bwimg, contours, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE );
All works fine.
I am trying to create a Mat object from a uchar*. I could not find a useful conversion. My code is below:
uchar* urgbImg; // this value is created in another function
Mat img_argb(HEIGHT, WIDTH, CV_8UC4, urgbImg);
Mat img_rgb(HEIGHT, WIDTH, CV_8UC3);
img_argb.convertTo(img_rgb, CV_8UC3);
cv::imwrite("RGB.png", img_rgb);
QImage img1(urgbImg, WIDTH, HEIGHT, QImage::Format_ARGB32);
QImage img2 = img1.convertToFormat(QImage::Format_RGB32);
QFile file2(QString::fromStdString("QRGB.png"));
file2.open(QIODevice::WriteOnly);
img2.save(&file2,"PNG",100);
file2.close();
The QRGB file is a fine result, but the RGB file is not. I have a uchar array, so it is 8 bit.
I tried CV_8UC3 (without conversion from CV_8UC4), CV_32SC3 and CV_32SC4. All results are bad. How can I create an RGB image from a uchar*?
convertTo() doesn't convert a CV_8UC4 image to CV_8UC3; it changes the element depth, not the number of channels. You should use the cvtColor() function instead:
Mat img_argb(HEIGHT, WIDTH, CV_8UC4, urgbImg);
Mat img_rgb;
cvtColor(img_argb,img_rgb,COLOR_BGRA2BGR);
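One detail that makes this work: QImage::Format_ARGB32 stores each pixel as 0xAARRGGBB, which on a little-endian machine lays out in memory as B, G, R, A. That is exactly the byte order OpenCV expects for a CV_8UC4 BGRA image, so COLOR_BGRA2BGR is the right conversion code, and imwrite("RGB.png", img_rgb) should now match the Qt-saved file.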
Given a color image, I want to get an image in which only one region keeps its color.
Mat img = imread("lena.jpg");
Rect roi = Rect(100, 100, 300, 300);// only this should be in color in output
Mat img_yuv;
cvtColor(img, img_yuv, CV_RGB2YUV);
vector<Mat> channels(3);
split(img_yuv, channels);
Mat Y = channels[0];
Mat U = channels[1];
Mat V = channels[2];
// create mask
Mat mask = Mat::zeros(Y.size(), Y.type());
rectangle(mask, roi, Scalar(1), CV_FILLED);
// merging channels
channels[0] = Y;
channels[1] = U.mul(mask)+(Scalar::all(1)-mask).mul(Y);
channels[2] = V.mul(mask)+(Scalar::all(1)-mask).mul(Y);
Mat img_yuv_out, img_out;
merge(channels, img_yuv_out);
cvtColor(img_yuv_out, img_out, CV_YUV2RGB);
imshow("masked_color", img_out);
imshow("lena", img);
With the above OpenCV code, here are my input and output images respectively.
In the ROI it works fine, but the rest of the image doesn't look like a grayscale image (not exactly, since we still have 3 channels).
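For what it's worth, the tint outside the ROI comes from blending the chroma channels with Y: in 8-bit YUV, grayscale corresponds to the neutral chroma value 128, not to Y. A minimal sketch of that fix, leaving the rest of the code above unchanged:
channels[1] = U.mul(mask) + (Scalar::all(1) - mask) * 128;
channels[2] = V.mul(mask) + (Scalar::all(1) - mask) * 128;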
You could try this:
get a copy of the image, and convert it to 3-channel grayscale (I don't know if you need to convert the grayscale explicitly back to (colored) RGB...)
get a Mat for the ROI you want to have the colors in, once for the grayscale copy and once for the original color image
assign/copy the color image ROI to the grayscale image ROI
Indeed, it's exactly as #AndreyKamaev suggests:
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
int main() {
char const * const fname_in = "lena.jpg";
char const * const fname_out = "lena_out.jpg";
cv::Mat img = cv::imread(fname_in, CV_LOAD_IMAGE_COLOR);
cv::Mat tmp;
cv::cvtColor(img, tmp, CV_BGR2GRAY);
cv::cvtColor(tmp, tmp, CV_GRAY2BGR);
cv::Rect roi(100, 100, 300, 300);
img(roi).copyTo(tmp(roi));
img = tmp;
cv::imwrite(fname_out, img);
}
Output image:
Basically the same as #moooeeeep suggests:
Mat tmp;
cvtColor(img, tmp, COLOR_BGR2GRAY);
cvtColor(tmp, tmp, COLOR_GRAY2BGR);
img(roi).copyTo(tmp(roi));
img = tmp;
To convert the ROI into grayscale in Python you can:
Convert the ROI to gray.
Replace the ROI by merging the gray, so that [gray, gray, gray] stands in place of BGR.
Now the ROI is gray.
image = cv2.imread('image.jpg')
h, w, _ = image.shape
r, c, s = h//4, w//4, min(h,w)//2
gray_portion = cv2.bitwise_not(cv2.cvtColor(image[r:r+s, c:c+s], cv2.COLOR_BGR2GRAY))
merged = cv2.merge([gray_portion, gray_portion, gray_portion]) #IMPORTANT
image[r:r+s, c:c+s] = merged
cv2.imshow('image', image)
cv2.waitKey(0)
For those who want it the other way round :-)
Mat tmp;
cvtColor(img, tmp, COLOR_BGR2GRAY);
cvtColor(tmp, tmp, COLOR_GRAY2BGR);
tmp(roi).copyTo(img(roi));
tmp = img;
I want to see if a template is present in an image using OpenCV and C++. However, due to the different distance at which the image is taken and the different position of the template in the image, the match does not occur correctly.
Here is my code:
IplImage* image = cvLoadImage("C:/images/Photo0734.jpg", 1);
IplImage* templat = cvLoadImage("C:/images/templatecoin.jpg", 1);
int percent = 25;
// declare a destination IplImage object with correct size, depth and channels
IplImage* image3 = cvCreateImage(cvSize((int)((image->width*percent)/100), (int)((image->height*percent)/100)), image->depth, image->nChannels);
//use cvResize to resize source to a destination image
cvResize(image, image3);
IplImage* image2 = cvCreateImage(cvSize(image3->width, image3->height), IPL_DEPTH_8U, 1);
IplImage* templat2 = cvCreateImage(cvSize(templat->width, templat->height), IPL_DEPTH_8U, 1);
cvCvtColor(image3, image2, CV_BGR2GRAY);
cvCvtColor(templat, templat2, CV_BGR2GRAY);
int w = image3->width - templat->width + 1;
int h = image3->height - templat->height + 1;
IplImage* result = cvCreateImage(cvSize(w, h), IPL_DEPTH_32F, 1);
cvMatchTemplate(image2, templat2, result, CV_TM_CCORR_NORMED);
double min_val, max_val;
CvPoint min_loc, max_loc;
cvMinMaxLoc(result, &min_val, &max_val, &min_loc, &max_loc);
cvRectangle(image3, max_loc, cvPoint(max_loc.x + templat->width, max_loc.y + templat->height), cvScalar(0, 255, 255), 1);  // color values are in the 0-255 range
cvShowImage("src", image3);
//cvShowImage("result image", result);
cvWaitKey(0);
Please note that I am unable to use "Mat". Is it possible to use IplImage* and make the code invariant to scaling and rotation? Please help me.
Have a look at these:
SIFT Wiki
SIFT example
OpenCV SIFT documentation
I think they can be useful for you.
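Since the question is constrained to IplImage*, one rough workaround for scale (though not rotation) is to run cvMatchTemplate at several template scales and keep the best score. A minimal sketch under that assumption, reusing the grayscale image2 and templat2 prepared in the question:
// multi-scale template matching with the C API
double best_score = -1.0;
CvPoint best_loc = cvPoint(0, 0);
double best_scale = 1.0;
for (double scale = 0.5; scale <= 1.5; scale += 0.1) {
    int tw = (int)(templat2->width * scale);
    int th = (int)(templat2->height * scale);
    if (tw < 1 || th < 1 || tw > image2->width || th > image2->height)
        continue;  // skip scales where the template no longer fits
    IplImage* templ_scaled = cvCreateImage(cvSize(tw, th), IPL_DEPTH_8U, 1);
    cvResize(templat2, templ_scaled, CV_INTER_LINEAR);
    IplImage* res = cvCreateImage(cvSize(image2->width - tw + 1, image2->height - th + 1), IPL_DEPTH_32F, 1);
    cvMatchTemplate(image2, templ_scaled, res, CV_TM_CCORR_NORMED);
    double min_val, max_val;
    CvPoint min_loc, max_loc;
    cvMinMaxLoc(res, &min_val, &max_val, &min_loc, &max_loc);
    if (max_val > best_score) {  // keep the strongest response across scales
        best_score = max_val;
        best_loc = max_loc;
        best_scale = scale;
    }
    cvReleaseImage(&templ_scaled);
    cvReleaseImage(&res);
}
// best_loc / best_scale now describe the strongest match found
cvRectangle(image3, best_loc, cvPoint(best_loc.x + (int)(templat2->width * best_scale), best_loc.y + (int)(templat2->height * best_scale)), cvScalar(0, 255, 255), 1);
For rotation invariance, plain template matching is a poor fit; the feature-based approaches linked above (SIFT and friends) are the usual route.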