Convert a QImage to grayscale - c++

I have a QImage and I need to convert it to grayscale, then later paint over it with colors. I found the allGray() and isGrayscale() functions to check whether an image is already grayscale, but no toGrayscale() or similarly named function.
Right now I'm using this code, but its performance is not very good:
for (int ii = 0; ii < image.width(); ii++) {
    for (int jj = 0; jj < image.height(); jj++) {
        int gray = qGray(image.pixel(ii, jj));
        image.setPixel(ii, jj, QColor(gray, gray, gray).rgb());
    }
}
What would be the best way, performance-wise, to convert a QImage to grayscale?

Since Qt 5.5, you can call QImage::convertToFormat() to convert a QImage to grayscale. Note that convertToFormat() returns the converted image rather than modifying the original, so assign the result:
QImage image = ...;
QImage gray = image.convertToFormat(QImage::Format_Grayscale8);

Rather than using the slow functions QImage::pixel() and QImage::setPixel(), use QImage::scanLine() to access the data. Pixels on a scan line (horizontal line) are consecutive. Assuming you have a 32 bpp image, you can use QRgb to iterate over the scan line. Finally, always put the x coordinate in the inner loop. Which gives:
for (int ii = 0; ii < image.height(); ii++) {
    uchar* scan = image.scanLine(ii);
    int depth = 4;
    for (int jj = 0; jj < image.width(); jj++) {
        QRgb* rgbpixel = reinterpret_cast<QRgb*>(scan + jj * depth);
        int gray = qGray(*rgbpixel);
        *rgbpixel = QColor(gray, gray, gray).rgba();
    }
}
A quick test with a 3585 x 2386 image gave:
********* Start testing of TestImage *********
Config: Using QTest library 4.7.4, Qt 4.7.4
PASS : TestImage::initTestCase()
RESULT : TestImage::grayscaleOp():
390 msecs per iteration (total: 390, iterations: 1)
PASS : TestImage::grayscaleOp()
RESULT : TestImage::grayscaleFast():
125 msecs per iteration (total: 125, iterations: 1)
PASS : TestImage::grayscaleFast()
PASS : TestImage::cleanupTestCase()
Totals: 4 passed, 0 failed, 0 skipped
********* Finished testing of TestImage *********
Source code:
testimage.h file:
#ifndef TESTIMAGE_H
#define TESTIMAGE_H

#include <QtTest/QtTest>
#include <QObject>
#include <QImage>

class TestImage : public QObject
{
    Q_OBJECT
public:
    explicit TestImage(QObject *parent = 0);

signals:

private slots:
    void grayscaleOp();
    void grayscaleFast();

private:
    QImage imgop;
    QImage imgfast;
};

#endif // TESTIMAGE_H
testimage.cpp file:
#include "testimage.h"
TestImage::TestImage(QObject *parent)
: QObject(parent)
, imgop("path_to_test_image.png")
, imgfast("path_to_test_image.png")
{
}
void TestImage::grayscaleOp()
{
QBENCHMARK
{
QImage& image = imgop;
for (int ii = 0; ii < image.width(); ii++) {
for (int jj = 0; jj < image.height(); jj++) {
int gray = qGray(image.pixel(ii, jj));
image.setPixel(ii, jj, QColor(gray, gray, gray).rgb());
}
}
}
}
void TestImage::grayscaleFast()
{
QBENCHMARK {
QImage& image = imgfast;
for (int ii = 0; ii < image.height(); ii++) {
uchar* scan = image.scanLine(ii);
int depth =4;
for (int jj = 0; jj < image.width(); jj++) {
QRgb* rgbpixel = reinterpret_cast<QRgb*>(scan + jj*depth);
int gray = qGray(*rgbpixel);
*rgbpixel = QColor(gray, gray, gray).rgba();
}
}
}
}
QTEST_MAIN(TestImage)
pro file:
QT += core gui
greaterThan(QT_MAJOR_VERSION, 4): QT += widgets
TARGET = QImageTest
TEMPLATE = app
CONFIG += qtestlib
SOURCES += testimage.cpp
HEADERS += testimage.h
Important notes:
You already get an important performance boost just by swapping the loops, i.e. putting the x coordinate in the inner loop. In this test case it was worth about 90 ms.
You may use other libraries such as OpenCV to do the grayscale conversion and then build the QImage from an OpenCV buffer. I would expect an even better performance improvement.
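For reference, here is a minimal sketch of that OpenCV route; the helper name and structure are my own, it assumes a Format_RGB32 source image and a little-endian host (where Format_RGB32 bytes are laid out as B, G, R, A), and it was not part of the benchmark above:
#include <opencv2/imgproc.hpp>
#include <QImage>

// Hypothetical helper: wrap the QImage buffer in a cv::Mat header without copying,
// let OpenCV do the grayscale conversion, then copy the result into a QImage that
// owns its own data.
QImage toGrayWithOpenCV(const QImage &src)
{
    Q_ASSERT(src.format() == QImage::Format_RGB32);

    cv::Mat argb(src.height(), src.width(), CV_8UC4,
                 const_cast<uchar*>(src.constBits()),
                 static_cast<size_t>(src.bytesPerLine()));

    cv::Mat gray;
    cv::cvtColor(argb, gray, cv::COLOR_BGRA2GRAY);

    // copy() detaches the result from the temporary cv::Mat buffer.
    return QImage(gray.data, gray.cols, gray.rows,
                  static_cast<int>(gray.step),
                  QImage::Format_Grayscale8).copy();
}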

I'll post a slightly modified version of @UmNyobe's code. I just increment a pointer along each scan line instead of computing every pixel's address from an index.
// We assume the format to be RGB32!!!
Q_ASSERT(image.format() == QImage::Format_RGB32);
for (int ii = 0; ii < image.height(); ii++) {
    QRgb *pixel = reinterpret_cast<QRgb*>(image.scanLine(ii));
    QRgb *end = pixel + image.width();
    for (; pixel != end; pixel++) {
        int gray = qGray(*pixel);
        *pixel = QColor(gray, gray, gray).rgb();
    }
}
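A further micro-optimization worth trying (not benchmarked in this thread): build the pixel value with qRgb() instead of going through a temporary QColor for every pixel:
// Same loop as above, but qRgb() composes the 0xffRRGGBB value directly
// instead of constructing a QColor per pixel.
Q_ASSERT(image.format() == QImage::Format_RGB32);
for (int ii = 0; ii < image.height(); ii++) {
    QRgb *pixel = reinterpret_cast<QRgb*>(image.scanLine(ii));
    QRgb *end = pixel + image.width();
    for (; pixel != end; pixel++) {
        int gray = qGray(*pixel);
        *pixel = qRgb(gray, gray, gray);
    }
}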

The internal Qt class QPixmapColorizeFilter uses a grayscale() helper function that solves a similar problem.
I derived the following function from it, which should solve the problem.
The important part is converting the image to a 32-bit format first, so that each pixel can be treated as a single 32-bit value and you do not have to worry about bit alignment.
You can also use the bits() function directly and iterate over all pixels instead of iterating over lines and columns; this is safe here because 32-bit formats have no padding between scan lines, and it avoids the multiplication performed inside scanLine().
QImage convertToGrayScale(const QImage &srcImage) {
    // Convert to a 32-bit pixel format
    QImage dstImage = srcImage.convertToFormat(srcImage.hasAlphaChannel() ?
        QImage::Format_ARGB32 : QImage::Format_RGB32);

    unsigned int *data = (unsigned int*)dstImage.bits();
    int pixelCount = dstImage.width() * dstImage.height();

    // Convert each pixel to grayscale
    for (int i = 0; i < pixelCount; ++i) {
        int val = qGray(*data);
        *data = qRgba(val, val, val, qAlpha(*data));
        ++data;
    }

    return dstImage;
}
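A minimal usage sketch (the file names are just placeholders):
// Load an image, convert it, and save the result; paths are placeholders.
QImage src("input.png");
QImage gray = convertToGrayScale(src); // 32-bit image with equal R, G and B
gray.save("input_gray.png");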

Related

QImage 16 bit grayscale with QQuickPaintedItem

I have unsigned 16-bit image data that I displayed by subclassing QQuickPaintedItem in Qt 5.12.3. I used a QImage with Format_RGB32, scaled the data from [0, 16383] to [0, 255], and set that as the color value for all three R, G, B channels. Now I am using Qt 5.15.2, which has a QImage Format_Grayscale16 that I'd like to use, but for some reason the image is displayed incorrectly. The code I used to convert my unsigned 16-bit image data to a QImage for both formats is shown below. The QQuickPaintedItem is a basic subclass that calls drawImage(window(), my_qimage); with the QImage returned from the code below. Why is the new format not displaying correctly?
Format_RGB32 method
QImage image(image_dim, QImage::Format_RGB32);
unsigned int pixel = 0;
for (uint16_t row = 0; row < nrow; row++) {
    uint *scanLine = reinterpret_cast<uint *>(image.scanLine(row));
    for (uint16_t col = 0; col < ncols; col++) {
        uint16_t value = xray_image.data()[pixel++]; // Get each pixel
        unsigned short color_value = uint16_t((float(value) / 16383) * 255.0f); // scale to [0, 255]
        *scanLine++ = qRgb(int(color_value), int(color_value), int(color_value));
    }
}
return image;
Format_Grayscale16 method
QImage image(image_dim, QImage::Format_Grayscale16);
unsigned int pixel = 0;
for (uint16_t row = 0; row < nrow; row++) {
    // **EDIT, WRONG:** uint *scanLine = reinterpret_cast<uint *>(image.scanLine(row));
    uint16_t *scanLine = reinterpret_cast<uint16_t *>(image.scanLine(row));
    for (uint16_t col = 0; col < ncols; col++) {
        *scanLine++ = xray_image.data()[pixel++]; // Get each pixel
    }
}
return image;
This worked for me (below are just fragments):
class Widget : public QWidget
{
private:
    QImage m_image;
    QImage m_newImage;
    QGraphicsScene *m_scene;
    QPixmap m_pixmap;
};

...

m_image.load("your/file/path/here");
m_newImage = m_image.convertToFormat(QImage::Format_Grayscale8);
m_pixmap.convertFromImage(m_newImage);

m_scene = new QGraphicsScene(this);
m_scene->addPixmap(m_pixmap);
m_scene->setSceneRect(m_pixmap.rect());
ui->graphicsView->setScene(m_scene);
Based on your OP, you probably want to render it differently. graphicsView is just a QGraphicsView defined in design mode.

How can I correctly adjust the contrast of an image?

I'm trying to change the contrast of an image from a horizontal slider in Qt. I used cv::saturate_cast to do this for brightness by converting BGR to Lab, and it worked. It doesn't seem to work for adjusting the contrast: I get black or light grey images. How can I fix it? This is what I've tried (I got the formulas from https://www.dfstudios.co.uk/articles/programming/image-programming-algorithms/image-processing-algorithms-part-5-contrast-adjustment/ ):
void MainWindow::on_horizontalSlider_2_valueChanged(int value)
{
    QPixmap pm = ui->label2->pixmap();
    if (!pm.isNull())
    {
        for (int i = 0; i < image.rows; i++)
            for (int j = 0; j < image.cols; j++)
            {
                float factor = (259 * (value + 255)) / (255 * (259 - value));
                image.at<cv::Vec3b>(i,j)[0] = cv::saturate_cast<uchar>(factor * (image.at<cv::Vec3b>(i,j)[0] - 128) + 128);
                image.at<cv::Vec3b>(i,j)[1] = cv::saturate_cast<uchar>(factor * (image.at<cv::Vec3b>(i,j)[1] - 128) + 128);
                image.at<cv::Vec3b>(i,j)[2] = cv::saturate_cast<uchar>(factor * (image.at<cv::Vec3b>(i,j)[2] - 128) + 128);
            }

        QPixmap pm = MatToPixmap(image);
        QImage imageupdate = pm.toImage();
        int w = ui->label2->width();
        int h = ui->label2->height();
        ui->label2->setPixmap(QPixmap::fromImage(imageupdate.scaled(w, h, Qt::KeepAspectRatio)));
    }
}
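One thing worth checking in the code above: the expression (259 * (value + 255)) / (255 * (259 - value)) is evaluated entirely in integer arithmetic, so it truncates to 0 or 1 before it is stored in factor, which would explain black or flat grey results. A minimal sketch of the same loop with the factor computed in floating point (and hoisted out of the loop):
// Sketch: compute the contrast factor once, in floating point, so the
// division does not truncate; the rest of the loop is unchanged.
float factor = (259.0f * (value + 255.0f)) / (255.0f * (259.0f - value));
for (int i = 0; i < image.rows; i++) {
    for (int j = 0; j < image.cols; j++) {
        for (int c = 0; c < 3; c++) {
            image.at<cv::Vec3b>(i, j)[c] =
                cv::saturate_cast<uchar>(factor * (image.at<cv::Vec3b>(i, j)[c] - 128) + 128);
        }
    }
}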

Setting pixel color of 8-bit grayscale image using pointer

I have this code:
QImage grayImage = image.convertToFormat(QImage::Format_Grayscale8);

int size = grayImage.width() * grayImage.height();

QRgb *data = new QRgb[size];
memmove(data, grayImage.constBits(), size * sizeof(QRgb));

QRgb *ptr = data;
QRgb *end = ptr + size;
for (; ptr < end; ++ptr) {
    int gray = qGray(*ptr);
}

delete[] data;
It is based on this: https://stackoverflow.com/a/40740985/8257882
How can I set the color of a pixel using that pointer?
In addition, using qGray() and loading a "bigger" image seems to crash this.
This works:
int width = image.width();
int height = image.height();
for (int y = 0; y < height; ++y) {
    for (int x = 0; x < width; ++x) {
        image.setPixel(x, y, qRgba(0, 0, 0, 255));
    }
}
But it is slow when compared to explicitly manipulating the image data.
Edit
Ok, I have this code now:
for (int y = 0; y < height; ++y) {
    uchar *line = grayImage.scanLine(y);
    for (int x = 0; x < width; ++x) {
        int gray = qGray(line[x]);
        *(line + x) = uchar(gray);
        qInfo() << gray;
    }
}
And it seems to work. However, when I use an image that has only black and white colors and print the gray value, black color gives me 0 and white gives 39. How can I get the gray value in a range of 0-255?
First of all, you are copying too much data in this line:
memmove(data, grayImage.constBits(), size * sizeof(QRgb));
The size of QRgb is 4 bytes, but according to the documentation the size of a Format_Grayscale8 pixel is only 8 bits, i.e. 1 byte. If you remove sizeof(QRgb) you should be copying the correct number of bytes, assuming all the lines in the bitmap are consecutive (which, according to the documentation, they are not: they are aligned to a minimum of 32 bits, so you would have to account for that in size). The array data should then not be of type QRgb[size] but uchar[size]. You can then modify data as you like. Finally, you will probably have to create a new QImage with one of the constructors that accept image bits as uchar and assign the new image to the old image:
auto newImage = QImage( data, image.width(), image.height(), QImage::Format_Grayscale8, ...);
grayImage = std::move( newImage );
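For completeness, a hedged sketch of what that construction could look like; the bytesPerLine value assumes data has no row padding, and the cleanup lambda (which hands ownership of the buffer to the QImage) is my own addition:
// Sketch only: QImage does not copy 'data', so either keep it alive yourself
// or hand ownership over via the cleanup function, as done here (assuming
// 'data' was allocated with new uchar[...] as suggested above).
auto newImage = QImage(data, image.width(), image.height(),
                       image.width(), // bytesPerLine: assumes tightly packed rows
                       QImage::Format_Grayscale8,
                       [](void *p) { delete[] static_cast<uchar*>(p); },
                       data);
grayImage = std::move(newImage);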
But instead of copying image data, you could probably just modify grayImage directly by accessing its data through bits(), or even better, through scanLine(), maybe something like this:
int line, column;
auto pLine = grayImage.scanLine(line);
*(pLine + column) = uchar(grayValue);
EDIT:
According to the scanLine documentation, the image data is at least 32-bit aligned. So if your 8-bit grayscale image is 3 pixels wide, a new scan line will start every 4 bytes. For a 3x3 image, the total memory required to hold the image pixels is therefore 12 bytes. The following code prints the required memory size:
#include <QImage>
#include <iostream>

int main() {
    auto image = QImage(3, 3, QImage::Format_Grayscale8);
    std::cout << image.bytesPerLine() * image.height() << "\n";
    return 0;
}
The fill method (setting all gray values to 0xC0) could be implemented like this:
auto image = QImage(3, 3, QImage::Format_Grayscale8);
uchar gray = 0xc0;
for (int i = 0; i < image.height(); ++i) {
    auto pLine = image.scanLine(i);
    for (int j = 0; j < image.width(); ++j)
        *pLine++ = gray;
}

Convert cv::Mat to openni::VideoFrameRef

I have a Kinect streaming data into a cv::Mat. I am trying to get some example code running that uses OpenNI.
Can I convert my Mat into an OpenNI format image somehow?
I just need the depth image, and after fighting with OpenNI for a long time, have given up on installing it.
I am using OpenCV 3, Visual Studio 2013, Kinect v2 for Windows.
The relevant code is:
void CDifodoCamera::loadFrame()
{
    //Read the newest frame
    openni::VideoFrameRef framed; //I assume I need to replace this with my Mat...
    depth_ch.readFrame(&framed);

    const int height = framed.getHeight();
    const int width = framed.getWidth();

    //Store the depth values
    const openni::DepthPixel* pDepthRow = (const openni::DepthPixel*)framed.getData();
    int rowSize = framed.getStrideInBytes() / sizeof(openni::DepthPixel);

    for (int yc = height - 1; yc >= 0; --yc)
    {
        const openni::DepthPixel* pDepth = pDepthRow;
        for (int xc = width - 1; xc >= 0; --xc, ++pDepth)
        {
            if (*pDepth < 4500.f)
                depth_wf(yc, xc) = 0.001f * (*pDepth);
            else
                depth_wf(yc, xc) = 0.f;
        }
        pDepthRow += rowSize;
    }
}
First you need to understand how your data is arriving. If it is already in a cv::Mat, you should be receiving two images: one for the RGB information, which is usually a 3-channel uchar cv::Mat, and another for the depth information, which is usually stored as 16-bit values in millimetres (you cannot save a float Mat as an image, but you can save it as a yml/xml file using OpenCV).
Assuming you want to read and process the image that contains the depth information, you can change your code to:
void CDifodoCamera::loadFrame()
{
    // Read the newest frame.
    // The depth image should be a PNG, since PNG supports 16 bits, and it must be loaded with the ANYDEPTH flag.
    cv::Mat depth_im = cv::imread("img_name.png", CV_LOAD_IMAGE_ANYDEPTH);

    const int height = depth_im.rows;
    const int width = depth_im.cols;

    for (int y = 0; y < height; y++)
    {
        for (int x = 0; x < width; x++)
        {
            if (depth_im.at<unsigned short>(y, x) < 4500)
                depth_wf(y, x) = 0.001f * (float)depth_im.at<unsigned short>(y, x);
            else
                depth_wf(y, x) = 0.f;
        }
    }
}
I hope this helps you. If you have any question just ask :)

OpenCV VLFeat Slic function call

I am trying to use the vl_slic_segment function of the VLFeat library using an input image stored in an OpenCV Mat. My code is compiling and running, but the output superpixel values do not make sense. Here is my code so far :
Mat bgrUChar = imread("/pathtowherever/image.jpg");

Mat bgrFloat;
bgrUChar.convertTo(bgrFloat, CV_32FC3, 1.0/255);

cv::Mat labFloat;
cvtColor(bgrFloat, labFloat, CV_BGR2Lab);

Mat labels(labFloat.size(), CV_32SC1);
vl_slic_segment(labels.ptr<vl_uint32>(), labFloat.ptr<const float>(), labFloat.cols, labFloat.rows, labFloat.channels(), 30, 0.1, 25);
I have tried not converting it to the Lab colorspace and setting different regionSize/regularization values, but the output is always very glitchy. I am able to retrieve the label values correctly; the problem is that each label is usually scattered over small, non-contiguous areas.
I think the problem is that the format of my input data is wrong, but I can't figure out how to pass it properly to the vl_slic_segment function.
Thank you in advance!
EDIT
Thank you David; as you helped me understand, vl_slic_segment wants the data ordered as [LLLLLAAAAABBBBB] (planar), whereas OpenCV stores its Lab data as [LABLABLABLABLAB] (interleaved).
In the course of my bachelor thesis I have had to use VLFeat's SLIC implementation as well. You can find a short example applying VLFeat's SLIC to Lenna.png on GitHub: https://github.com/davidstutz/vlfeat-slic-example.
Maybe a look at main.cpp will help you figure out how to convert the images obtained by OpenCV to the right format:
// OpenCV can be used to read images.
#include <opencv2/opencv.hpp>

// The VLFeat header files need to be declared external.
extern "C" {
    #include "vl/generic.h"
    #include "vl/slic.h"
}

int main() {
    // Read the Lenna image. The matrix 'mat' will have 3 8 bit channels
    // corresponding to BGR color space.
    cv::Mat mat = cv::imread("Lenna.png", CV_LOAD_IMAGE_COLOR);

    // Convert image to one-dimensional array.
    float* image = new float[mat.rows*mat.cols*mat.channels()];
    for (int i = 0; i < mat.rows; ++i) {
        for (int j = 0; j < mat.cols; ++j) {
            // Assuming three channels ...
            image[j + mat.cols*i + mat.cols*mat.rows*0] = mat.at<cv::Vec3b>(i, j)[0];
            image[j + mat.cols*i + mat.cols*mat.rows*1] = mat.at<cv::Vec3b>(i, j)[1];
            image[j + mat.cols*i + mat.cols*mat.rows*2] = mat.at<cv::Vec3b>(i, j)[2];
        }
    }

    // The algorithm will store the final segmentation in a one-dimensional array.
    vl_uint32* segmentation = new vl_uint32[mat.rows*mat.cols];
    vl_size height = mat.rows;
    vl_size width = mat.cols;
    vl_size channels = mat.channels();

    // The region size defines the number of superpixels obtained.
    // Regularization describes a trade-off between the color term and the
    // spatial term.
    vl_size region = 30;
    float regularization = 1000.;
    vl_size minRegion = 10;

    vl_slic_segment(segmentation, image, width, height, channels, region, regularization, minRegion);

    // Convert segmentation.
    int** labels = new int*[mat.rows];
    for (int i = 0; i < mat.rows; ++i) {
        labels[i] = new int[mat.cols];
        for (int j = 0; j < mat.cols; ++j) {
            labels[i][j] = (int) segmentation[j + mat.cols*i];
        }
    }

    // Compute a contour image: this actually colors every border pixel
    // red such that we get relatively thick contours.
    int label = 0;
    int labelTop = -1;
    int labelBottom = -1;
    int labelLeft = -1;
    int labelRight = -1;

    for (int i = 0; i < mat.rows; i++) {
        for (int j = 0; j < mat.cols; j++) {
            label = labels[i][j];

            labelTop = label;
            if (i > 0) {
                labelTop = labels[i - 1][j];
            }

            labelBottom = label;
            if (i < mat.rows - 1) {
                labelBottom = labels[i + 1][j];
            }

            labelLeft = label;
            if (j > 0) {
                labelLeft = labels[i][j - 1];
            }

            labelRight = label;
            if (j < mat.cols - 1) {
                labelRight = labels[i][j + 1];
            }

            if (label != labelTop || label != labelBottom || label != labelLeft || label != labelRight) {
                mat.at<cv::Vec3b>(i, j)[0] = 0;
                mat.at<cv::Vec3b>(i, j)[1] = 0;
                mat.at<cv::Vec3b>(i, j)[2] = 255;
            }
        }
    }

    // Save the contour image.
    cv::imwrite("Lenna_contours.png", mat);

    return 0;
}
In addition, have a look at README.md within the GitHub repository. The following figures show some example outputs of setting the regularization to 1 (100,1000) and setting the region size to 30 (20,40).
Figure 1: Superpixel segmentation with region size set to 30 and regularization set to 1.
Figure 2: Superpixel segmentation with region size set to 30 and regularization set to 100.
Figure 3: Superpixel segmentation with region size set to 30 and regularization set to 1000.
Figure 4: Superpixel segmentation with region size set to 20 and regularization set to 1000.
Figure 5: Superpixel segmentation with region size set to 40 and regularization set to 1000.