Hi,
I have read the OpenCV reference on this site and I'm using the following code:
VideoCapture mCap;
Mat mcolImage, mbwImage;

mCap >> mcolImage;                          // new frame from the camera
cvtColor(mcolImage, mcolImage, CV_BGR2RGB); // OpenCV delivers BGR, QImage expects RGB
cvtColor(mcolImage, mbwImage, CV_RGB2GRAY); // single-channel greyscale copy

QImage colImagetmp((uchar*)mcolImage.data, mcolImage.cols, mcolImage.rows,
                   mcolImage.step, QImage::Format_RGB888);  // colour
QImage bwImagetmp((uchar*)mbwImage.data, mbwImage.cols, mbwImage.rows,
                  mbwImage.step, QImage::Format_Indexed8);  // greyscale, I hope

ui.bwDisplay->setPixmap(QPixmap::fromImage(bwImagetmp));
ui.colDisplay->setPixmap(QPixmap::fromImage(colImagetmp));
I'm trying to convert one of the outputs to greyscale. Unfortunately both are still in colour, and I can't see which step I've missed.
Thanks for the help.
You need to explicitly set a gray color table for bwImagetmp:
QVector<QRgb> colorTable;
for (int i = 0; i < 256; i++) colorTable.push_back(qRgb(i, i, i));
bwImagetmp.setColorTable(colorTable);
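Alternatively, if your Qt is 5.5 or newer, QImage::Format_Grayscale8 displays single-channel data directly and makes the color table unnecessary; a minimal sketch:

QImage bwImagetmp((uchar*)mbwImage.data, mbwImage.cols, mbwImage.rows,
                  mbwImage.step, QImage::Format_Grayscale8);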
I'm trying to change the brightness of an image by converting it from BGR to Lab and changing the L channel to L + brightness. It does change the brightness, but the output image is blue. Why?
void MainWindow::BrightnessSlider(cv::Mat image)
{
    cv::Mat image2;
    cv::cvtColor(image, image2, cv::COLOR_BGR2Lab);

    // add 'brightness' to the L channel of every pixel
    for (int i = 0; i < image2.rows; i++)
    {
        for (int j = 0; j < image2.cols; j++)
        {
            image2.at<cv::Vec3b>(i, j)[0] =
                cv::saturate_cast<uchar>(image2.at<cv::Vec3b>(i, j)[0] + brightness);
        }
    }

    cv::cvtColor(image2, image2, cv::COLOR_Lab2BGR);
    QImage imageupdate = QImage((const unsigned char*)image2.data,
                                image2.cols, image2.rows, QImage::Format_RGB888);
    int w = ui->label->width();
    int h = ui->label->height();
    ui->label->setPixmap(QPixmap::fromImage(imageupdate.scaled(w, h, Qt::KeepAspectRatio)));
}
The main problem here is that 3-channel color images in OpenCV use BGR memory layout, while in Qt they use RGB memory layout. That's why your image shown in QLabel looks "blue".
To fix the memory layout problem, you should change cv::COLOR_Lab2BGR to cv::COLOR_Lab2RGB in the second cv::cvtColor():
cv::cvtColor(image2, image2, cv::COLOR_Lab2RGB);
Or append .rgbSwapped() to imageupdate (note that imageupdate will not share memory block with image2):
QImage imageupdate = QImage((const unsigned char*)(image2.data),
image2.cols, image2.rows, QImage::Format_RGB888).rgbSwapped();
BTW, you can just use Mat::operator+(const Scalar&) to change the value of every pixel; adding the same offset to all three BGR channels brightens the image in much the same way, so the Lab round-trip and the for-loops are unnecessary:
cv::Mat image2 = image + cv::Scalar::all(brightness);
// convert BGR to RGB if you don't want to allocate additional memory
// for imageupdate with QImage::rgbSwapped():
cv::cvtColor(image2, image2, cv::COLOR_BGR2RGB);
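Putting the two fixes together, the whole slot shrinks to something like this (a sketch based on your code; brightness is assumed to be a member variable, as in the original):

void MainWindow::BrightnessSlider(cv::Mat image)
{
    // brighten all channels at once; values saturate automatically
    cv::Mat image2 = image + cv::Scalar::all(brightness);
    // swap BGR to RGB in place so QImage can wrap the buffer directly
    cv::cvtColor(image2, image2, cv::COLOR_BGR2RGB);

    QImage imageupdate((const unsigned char*)image2.data, image2.cols,
                       image2.rows, image2.step, QImage::Format_RGB888);
    int w = ui->label->width();
    int h = ui->label->height();
    ui->label->setPixmap(QPixmap::fromImage(imageupdate.scaled(w, h, Qt::KeepAspectRatio)));
}

Passing image2.step explicitly guards against row padding, which the four-argument QImage constructor would otherwise misread.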
I'm using OpenCV in C++ to process a cv::Mat before publishing it to a ROS topic. For some reason cv::drawKeypoints messes up my result by virtually stretching it horizontally beyond the image frame: the blob in the right topic corresponds to the one at the top left in the left topic.
Here's my code:
image_transport::Publisher pubthresh;
image_transport::Publisher pubkps;
cv::SimpleBlobDetector detector;

void imageCallback(const sensor_msgs::ImageConstPtr& msg)
{
    cv::Mat mat = cv_bridge::toCvShare(msg, "bgr8")->image;
    cv::cvtColor(mat, mat, CV_BGR2GRAY);
    cv::threshold(mat, mat, 35, 255, 0);

    std::vector<cv::KeyPoint> keypoints;
    detector.detect(mat, keypoints);

    cv::Mat kps;
    cv::drawKeypoints(mat, keypoints, kps, cv::Scalar(0, 0, 255),
                      cv::DrawMatchesFlags::DRAW_RICH_KEYPOINTS);

    sensor_msgs::ImageConstPtr ithresh, ikps;
    ithresh = cv_bridge::CvImage(std_msgs::Header(), "mono8", mat).toImageMsg();
    ikps = cv_bridge::CvImage(std_msgs::Header(), "mono8", kps).toImageMsg();

    pubthresh.publish(ithresh);
    pubkps.publish(ikps);
}

int main(int argc, char **argv)
{
    ...
    image_transport::Subscriber sub = it.subscribe("/saliency_map", 1, imageCallback);
    ...
}
After the cv::drawKeypoints operation both cv::Mats are treated the same. According to the documentation the image shouldn't get resized either. What am I missing here?
Looks like your result image isn't grayscale but a color image.
The "stretching" means that each pixel implicitly becomes three times as wide in the horizontal direction, because the three channels are interpreted as consecutive grayscale values.
So try converting kps to grayscale before publishing it:
cv::cvtColor(kps,kps, CV_BGR2GRAY );
Or adjust the line

ikps = cv_bridge::CvImage(std_msgs::Header(), "mono8", kps).toImageMsg();

to publish a BGR color image instead of "mono8". I'm not familiar enough with that API to give the exact call, though.
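For what it's worth, the encoding string should presumably mirror the "bgr8" already used in the callback, so the color version of that line might look like this (an untested sketch):

ikps = cv_bridge::CvImage(std_msgs::Header(), "bgr8", kps).toImageMsg();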
I want to capture an image and use it as a gray level image.
I have the following code:
CvCapture *p = cvCreateCameraCapture(0);
cvSetCaptureProperty(p, CV_CAP_PROP_FRAME_WIDTH, 1024);
cvSetCaptureProperty(p, CV_CAP_PROP_FRAME_HEIGHT, 1024);

IplImage* frame;
for (int i = 0; i < 25; i++)   // grab 25 frames, keep the last one
{
    frame = cvQueryFrame(p);
}
cvSaveImage("test.jpg", frame);

Mat r = imread("test.jpg", 1);
Mat inputImage;
cvtColor(r, inputImage, COLOR_RGB2GRAY);
In my code frame is an RGB image (three channels). When I read the saved image back into r, it has two channels.
I have two questions:
Why does this happen?
How can I get a single-channel gray-level image?
For your first question: you should check which camera/hardware you are using. Also, confirm whether frame really has 2 channels by printing:

cout << frame->nChannels << endl;
For the second part: to read the image as grayscale, change
Mat r = imread("test.jpg", 1);
to
Mat r = imread("test.jpg", 0);
See the imread documentation for the meaning of the flag values.
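For reference, the numeric flags map to named constants in recent OpenCV versions, which read better:

Mat r = imread("test.jpg", IMREAD_GRAYSCALE); // 0: force a single gray channel
// IMREAD_COLOR     ==  1  (always 3-channel BGR)
// IMREAD_UNCHANGED == -1  (keep the file's own channel count)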
I want to implement an OCR feature.
I have collected some samples and I want to use KNearest to implement it.
So I use the code below to load the data and initialize KNearest:
KNearest* knn = new KNearest;
Mat mData, mClass;
for (int i = 0; i <= 9; ++i)
{
    Mat mImage = imread( FILENAME ); // the filename format is '%d.bmp', a 15x15 image
    Mat mFloat;
    if (mImage.empty()) break;       // stop if the file doesn't exist
    mImage.convertTo(mFloat, CV_32FC1);
    mData.push_back(mFloat.reshape(1, 1)); // one row per sample
    mClass.push_back('0' + i);             // the label for that sample
}
knn->train(mData, mClass);
Then I call this code to find the best match:
for (vector<Mat>::iterator it = charset.begin(); it != charset.end(); ++it)
{
    Mat mFloat;
    it->convertTo(mFloat, CV_32FC1); // 'it' points to a 15x15 gray image
    float result = knn->find_nearest(mFloat.reshape(1, 1), knn->get_max_k());
}
But my application crashes in find_nearest.
Can anyone help me?
I seem to have found the problem:
my input images are gray images converted with cvtColor, but the sample images loaded with imread are 3-channel color, so the training rows and the query rows end up with different lengths.
After I add
cvtColor(mImage, mImage, COLOR_BGR2GRAY);
between
if (mImage.empty()) break;
mImage.convertTo(mFloat, CV_32FC1);
find_nearest() returns a value and my application works fine.
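For completeness, the corrected loading loop looks like this (the same code as above with the grayscale conversion added):

for (int i = 0; i <= 9; ++i)
{
    Mat mImage = imread( FILENAME ); // '%d.bmp', a 15x15 image
    Mat mFloat;
    if (mImage.empty()) break;
    cvtColor(mImage, mImage, COLOR_BGR2GRAY); // match the gray query images
    mImage.convertTo(mFloat, CV_32FC1);
    mData.push_back(mFloat.reshape(1, 1));
    mClass.push_back('0' + i);
}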
I am getting a floating-point image with values in the range 0 to 1. I tried to display this float image in a Qt label, but it didn't work.
I checked the QImage formats and found that Qt doesn't support float images, so now I am trying it these two ways:
QVector<QRgb> colorTable;
for (int i = 0; i < 256; i++)
    colorTable.push_back(qRgb(i, i, i));

Mat img2;
DepthImg.convertTo(img2, CV_8UC1);
assert(img2.isContinuous());
QImage image = QImage((unsigned char*)img2.data, DepthImg.cols, DepthImg.rows,
                      QImage::Format_Indexed8);
image.setColorTable(colorTable);
ui.ThreeD->setPixmap(QPixmap::fromImage(image, Qt::AutoColor));
ui.ThreeD->setScaledContents(true);
qApp->processEvents();
this->ui.ThreeD->show();
Here "DepthImage" is a float point image.
Mat img2;
DepthImg.convertTo(img2, CV_8UC1);
assert(img2.isContinuous());
QImage image = QImage((unsigned char*)img2.data, img2.cols, img2.rows,
                      img2.cols * 3, QImage::Format_RGB888);
ui.ThreeD->setPixmap(QPixmap::fromImage(image, Qt::AutoColor));
ui.ThreeD->setScaledContents(true);
qApp->processEvents();
this->ui.ThreeD->show();
After executing it both ways I get only a black image.
Can anybody help me solve this issue?
You need to scale the image by a factor of 255: convertTo's optional third argument is a scale factor, and without it your 0-1 floats all truncate to 0, hence the black image.
You can do that by replacing
DepthImg.convertTo(img2, CV_8UC1);
with
DepthImg.convertTo(img2, CV_8UC1, 255.0);
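With that change your first (Format_Indexed8 plus color table) attempt should work; a condensed sketch combining the pieces above:

QVector<QRgb> colorTable;
for (int i = 0; i < 256; i++)
    colorTable.push_back(qRgb(i, i, i));

Mat img2;
DepthImg.convertTo(img2, CV_8UC1, 255.0); // map [0,1] floats to [0,255] bytes
assert(img2.isContinuous());

QImage image((unsigned char*)img2.data, img2.cols, img2.rows,
             img2.step, QImage::Format_Indexed8);
image.setColorTable(colorTable);
ui.ThreeD->setPixmap(QPixmap::fromImage(image));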