How to compare two RGB images in Qt? - c++

I need to compare two color RGB images and get a result image of the pixel-by-pixel difference. Is there any way to do that in Qt?
I would appreciate any help or advice.

Here is an alternative based on this QtForum question:
#include <algorithm>   // for std::min

void subtract(const QImage &left, const QImage &right, QImage &result)
{
    // This ensures that you work only on the intersection of the image areas
    int w = std::min(left.width(),  right.width());
    int h = std::min(left.height(), right.height());
    w = std::min(w, result.width());
    h = std::min(h, result.height());

    for (int i = 0; i < h; i++) {
        const QRgb *rgbLeft  = reinterpret_cast<const QRgb*>(left.constScanLine(i));
        const QRgb *rgbRight = reinterpret_cast<const QRgb*>(right.constScanLine(i));
        QRgb *rgbResult      = reinterpret_cast<QRgb*>(result.scanLine(i));
        for (int j = 0; j < w; j++) {
            rgbResult[j] = rgbLeft[j] - rgbRight[j];
        }
    }
}
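Note that subtracting raw QRgb values like this mixes the alpha/red/green/blue bytes whenever a channel underflows. If you want a channel-wise difference image instead, here is a minimal sketch (my own variation, not from the QtForum answer) that can replace the inner loop above; it assumes both images use a 32-bit format such as QImage::Format_ARGB32 or Format_RGB32:
        // Channel-wise absolute difference, drop-in replacement for the inner loop above
        for (int j = 0; j < w; j++) {
            const QRgb l = rgbLeft[j];
            const QRgb r = rgbRight[j];
            rgbResult[j] = qRgb(qAbs(qRed(l)   - qRed(r)),
                                qAbs(qGreen(l) - qGreen(r)),
                                qAbs(qBlue(l)  - qBlue(r)));
        }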

First, an RGB image is a 3-dimensional matrix containing the 3 channels (R, G, B).
To obtain the difference you can simply subtract the matrices.
If you're using OpenCV, consider the code below; otherwise you can traverse the matrices and subtract each position separately.
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    Mat img  = imread("...");      // first image
    Mat img2 = imread("...");      // second image
    Mat diff_img = img - img2;     // per-element saturated subtraction
    imwrite("...", diff_img);      // save (or display) the difference image
    return 0;
}
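One caveat (my addition, not part of the original answer): img - img2 saturates negative differences to zero, so pixels that got brighter and pixels that got darker are treated differently. For a symmetric difference, OpenCV provides cv::absdiff, used here with the img and img2 from the snippet above:
    Mat diff_abs;
    absdiff(img, img2, diff_abs);   // |img - img2| per element, per channel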

Using QImage you can iterate at pixel level and simply output the RGB differences into a third image.
QRgb QImage::pixel(int x, int y) const
void QImage::setPixelColor(int x, int y, const QColor &color)
Keep in mind that, for optimal performance, you should traverse the image row by row: put the rows (y) in the outer loop and the pixels within a row (x) in the inner loop, so memory is accessed sequentially. People often instinctively do the opposite, probably because width is usually mentioned before height.
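For illustration, here is a minimal sketch of that per-pixel approach (my own example, not from the original answer), assuming img1 and img2 are QImages of equal size:
    // Per-pixel absolute RGB difference of two equally sized QImages
    QImage diff(img1.width(), img1.height(), QImage::Format_RGB32);
    for (int y = 0; y < img1.height(); ++y) {        // rows in the outer loop
        for (int x = 0; x < img1.width(); ++x) {     // pixels within a row in the inner loop
            const QRgb p1 = img1.pixel(x, y);
            const QRgb p2 = img2.pixel(x, y);
            diff.setPixelColor(x, y, QColor(qAbs(qRed(p1)   - qRed(p2)),
                                            qAbs(qGreen(p1) - qGreen(p2)),
                                            qAbs(qBlue(p1)  - qBlue(p2))));
        }
    }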

bool ImagesAreSimilar(QImage *img1, QImage *img2)
{
    if (img1->isNull() || img2->isNull()) {
        return false;
    }
    if (img1->height() != img2->height()) {
        return false;
    }
    if (img1->width() != img2->width()) {
        return false;
    }
    // The loop below assumes both images use the same 32-bit format
    // (4 bytes per pixel), e.g. QImage::Format_ARGB32 or Format_RGB32.
    if (img1->format() != img2->format() || img1->depth() != 32) {
        return false;
    }
    auto pixel1 = img1->bits();
    auto pixel2 = img2->bits();
    for (int y = 0; y < img1->height(); y++) {
        for (int x = 0; x < img1->width(); x++) {
            if ((pixel1[0] != pixel2[0]) ||
                (pixel1[1] != pixel2[1]) ||
                (pixel1[2] != pixel2[2]) ||
                (pixel1[3] != pixel2[3])) {
                return false;
            }
            pixel1 += 4;
            pixel2 += 4;
        }
    }
    return true;
}

Related

How to get negative of each channel (Red, Green, Blue) in RGB image?

I am trying to get the negative of each channel (Red, Green, Blue) in an RGB image.
Simply put:
If the value of the red channel in an RGB image is 'r', I am looking to get r' = 255 - r.
Repeat this process for green and blue as well.
Finally, merge r', g' and b' to display the image.
Below is the code I have written, but it gives:
Process terminated with status -1073741819
as output. Also please see the detailed output.
#include<iostream>
#include<opencv2/highgui/highgui.hpp>
#include<opencv2/imgproc/imgproc.hpp>
using namespace cv;
using namespace std;
//#include<filesystem>
int main()
{
    Mat myImage;                    // matrix to load the image into
    Mat different_Channels[3];      // matrix with three channels
    String imgPath = "C:/Users/tusha/Desktop/ResearchPractise/testNegativeImage/RGB.jpg";
    myImage = imread(imgPath, IMREAD_UNCHANGED);    // loading the image into the myImage matrix
    split(myImage, different_Channels);             // splitting the image into 3 different channels
    Mat b = different_Channels[0];  // blue channel
    Mat g = different_Channels[1];  // green channel
    Mat r = different_Channels[2];  // red channel
    // for red channel
    for (int y = 0; y < myImage.rows; y++) {
        for (int x = 0; x < myImage.cols; x++) {
            // Retrieving the values of a pixel
            int pixelr = r.at<uchar>(x,y);
            pixelr = 255 - pixelr;
            r.at<uchar>(x,y) = pixelr;
        }
    }
    // for green channel
    for (int y = 0; y < myImage.rows; y++) {
        for (int x = 0; x < myImage.cols; x++) {
            // Retrieving the values of a pixel
            int pixelg = g.at<uchar>(x,y);
            pixelg = 255 - pixelg;
            g.at<uchar>(x,y) = pixelg;
        }
    }
    // for blue channel
    for (int y = 0; y < myImage.rows; y++) {
        for (int x = 0; x < myImage.cols; x++) {
            // Retrieving the values of a pixel
            int pixelb = b.at<uchar>(x,y);
            pixelb = 255 - pixelb;
            b.at<uchar>(x,y) = pixelb;
        }
    }
    vector<Mat> channels;
    channels.push_back(r);
    channels.push_back(g);
    channels.push_back(b);
    Mat negImage;
    merge(channels, negImage);
    cout << "Negative image";
    namedWindow("Negative", WINDOW_NORMAL);
    imshow("Negative", negImage);
    return 0;
}
The main issue:
As you can see in the cv::Mat::at documentation, you should first pass the row (i.e. your y coordinate), and then the column (i.e. your x coordinate).
Therefore all 6 lines using:
at<uchar>(x,y)
Should be changed to:
at<uchar>(y,x)
Assuming your image is not square, swapping the coordinates as you did is not only semantically wrong, but also results in access via invalid indices (causing the access violation).
Regarding the result display:
You can also remove the WINDOW_NORMAL argument you pass to cv::namedWindow in order to see the result image in its actual size.
In addition, you should add a call to cv::waitKey (e.g. cv::waitKey(0);) after it to keep the window open and process GUI events.
Note that using cv::Mat::at to traverse all pixels is quite inefficient. A more efficient approach would be to use cv::Mat::ptr per row to get a pointer to the row data, and then traverse it using pointer arithmetic.
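For illustration (my own sketch, not part of the original answer), the same negative can be computed with cv::Mat::ptr on the interleaved 8-bit BGR image, without splitting the channels first:
    // Invert every channel of an 8-bit, 3-channel image using row pointers
    for (int y = 0; y < myImage.rows; y++) {
        uchar* row = myImage.ptr<uchar>(y);                       // pointer to the start of row y
        for (int x = 0; x < myImage.cols * myImage.channels(); x++) {
            row[x] = 255 - row[x];                                // negative of each channel value
        }
    }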
A side note: it is better to avoid using namespace std (see Why is "using namespace std;" considered bad practice?). A similar argument applies to using namespace cv;.

How can I represent colormap data in Qwt from cv::Mat?

I'm developing an application in c++ with Qt and Qwt framework for scientific plots. I have matrix data stored as cv::Mat, representing an image with scalar data (MxN), which needs to be visualized as a colormap.
With OpenCV it is performed using cv::applyColorMap(img,cm_img,cv::COLORMAP_JET) and cv::imshow("img name", img), as described here
I have tried converting the cv::Mat to a QImage, as described here and here, but it doesn't seem to work properly: when I try to show the resulting images, the output doesn't make sense.
From Qwt, there are some classes that look interesting for this purpose: QwtMatrixRasterData, QwtPlotSpectrogram or QwtPlotRasterItem.
What I need as the final output would be something like this: given an MxN matrix with double values, calling something like imshow should give me a colormap image like this.
We ended up using QCustomPlot, feeding it a QVector<double>.
The idea is to create the QVector from the cv::Mat:
QVector<double> cvMatToQVector(const cv::Mat& mat) {
    QVector<double> image;
    // Assumes the matrix holds double values (CV_64F)
    auto img_ptr = mat.begin<double>();
    for (; img_ptr != mat.end<double>(); img_ptr++) {
        image.append(*img_ptr);
    }
    return image;
}
Then we create a QCPColorMap* colorMap and populate it with QVector vec data:
// Assuming the matrix has `row` rows and `col` columns, stored row-major
for (int yIndex = 0; yIndex < row; ++yIndex) {
    int y_col = yIndex * col;
    for (int xIndex = 0; xIndex < col; ++xIndex) {
        double z = vec.at(xIndex + y_col);
        colorMap->data()->setCell(xIndex, yIndex, z);
    }
}
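For context, the color map itself has to be created and sized before the fill loop above. A rough sketch of that setup (my own, based on the standard QCustomPlot API, not part of the original answer; customPlot is assumed to be the QCustomPlot widget):
    QCPColorMap *colorMap = new QCPColorMap(customPlot->xAxis, customPlot->yAxis);
    colorMap->data()->setSize(col, row);                            // one cell per matrix element
    colorMap->data()->setRange(QCPRange(0, col), QCPRange(0, row)); // coordinate range of the cells
    // ... fill the cells with the loop shown above ...
    colorMap->setGradient(QCPColorGradient::gpJet);                 // Jet gradient, similar to COLORMAP_JET
    colorMap->rescaleDataRange();
    customPlot->rescaleAxes();
    customPlot->replot();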

Manipulating pixels of a cv::Mat just doesn't take effect

The following code is just supposed to load an image, fill it with a constant value and save it again.
Of course that doesn't have a purpose yet, but still it just doesn't work.
I can read the pixel values in the loop, but all changes have no effect and the file is saved exactly as it was loaded.
I think I followed the "efficient way" described here accurately: http://docs.opencv.org/2.4/doc/tutorials/core/how_to_scan_images/how_to_scan_images.html
int main()
{
    Mat im = imread("C:\\folder\\input.jpg");
    int channels = im.channels();
    int pixels = im.cols * channels;
    if (!im.isContinuous())
    { return 0; } // Just to show that I've thought of that. It never exits here.
    uchar* f = im.ptr<uchar>(0);
    for (int i = 0; i < pixels; i++)
    {
        f[i] = (uchar)100;
    }
    imwrite("C:\\folder\\output.jpg", im);
    return 0;
}
Normal cv functions like cvtColor() are taking effect as expected.
Are the changes through the array happening on a buffer somehow?
Huge thanks in advance!
The problem is that you are not looking at all the pixels in the image. Your code only covers im.cols*im.channels() values, which is a relatively small number compared to the size of the image (im.cols*im.rows*im.channels()). Used as the bound of the pointer loop, it only sets values for the first row of the image (if you look closely, you will notice the saved image does have that row set).
Below is the corrected code:
int main()
{
    Mat im = imread("C:\\folder\\input.jpg");
    int channels = im.channels();
    int pixels = im.cols * im.rows * channels;
    if (!im.isContinuous())
    { return 0; } // Just to show that I've thought of that. It never exits here.
    uchar* f = im.ptr<uchar>(0);
    for (int i = 0; i < pixels; i++)
    {
        f[i] = (uchar)100;
    }
    imwrite("C:\\folder\\output.jpg", im);
    return 0;
}
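As a side note (my addition, not from the original answer), OpenCV can also fill a whole matrix with a constant value directly, which avoids the manual index arithmetic and works even for non-continuous matrices:
    // Fill every pixel of the 3-channel image with the value 100
    im.setTo(Scalar(100, 100, 100));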

Convert cv::Mat to openni::VideoFrameRef

I have a Kinect streaming data into a cv::Mat, and I am trying to get some example code running that uses OpenNI.
Can I convert my Mat into an OpenNI format image somehow?
I just need the depth image, and after fighting with OpenNI for a long time, have given up on installing it.
I am using OpenCV 3, Visual Studio 2013, Kinect v2 for Windows.
The relevant code is:
void CDifodoCamera::loadFrame()
{
    //Read the newest frame
    openni::VideoFrameRef framed; //I assume I need to replace this with my Mat...
    depth_ch.readFrame(&framed);

    const int height = framed.getHeight();
    const int width = framed.getWidth();

    //Store the depth values
    const openni::DepthPixel* pDepthRow = (const openni::DepthPixel*)framed.getData();
    int rowSize = framed.getStrideInBytes() / sizeof(openni::DepthPixel);

    for (int yc = height-1; yc >= 0; --yc)
    {
        const openni::DepthPixel* pDepth = pDepthRow;
        for (int xc = width-1; xc >= 0; --xc, ++pDepth)
        {
            if (*pDepth < 4500.f)
                depth_wf(yc,xc) = 0.001f*(*pDepth);
            else
                depth_wf(yc,xc) = 0.f;
        }
        pDepthRow += rowSize;
    }
}
First you need to understand how your data arrives. If it is already in a cv::Mat, you should be receiving two images: one for the RGB information, which is usually a 3-channel uchar cv::Mat, and another for the depth information, which is usually stored as a 16-bit representation in millimeters (you cannot save a float Mat as an image, but you can save it as a yml/xml file using OpenCV).
Assuming you want to read and process the image that contains the depth information, you can change your code to:
void CDifodoCamera::loadFrame()
{
    //Read the newest frame
    //The depth image should be a png, since png supports 16 bits, and it must be
    //loaded with the ANYDEPTH flag so the 16-bit values are preserved
    cv::Mat depth_im = cv::imread("img_name.png", cv::IMREAD_ANYDEPTH);

    const int height = depth_im.rows;
    const int width = depth_im.cols;

    for (int y = 0; y < height; y++)
    {
        for (int x = 0; x < width; x++)
        {
            if (depth_im.at<unsigned short>(y,x) < 4500)
                depth_wf(y,x) = 0.001f * (float)depth_im.at<unsigned short>(y,x);
            else
                depth_wf(y,x) = 0.f;
        }
    }
}
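A shorter equivalent of that loop (my own sketch, not from the original answer) converts the whole 16-bit depth map to float meters with cv::Mat::convertTo and then zeroes the out-of-range values with a mask:
    // depth_im: CV_16U depth in millimeters; depth_m: CV_32F depth in meters
    cv::Mat depth_m;
    depth_im.convertTo(depth_m, CV_32F, 0.001);      // scale mm -> m while converting the type
    depth_m.setTo(0.f, depth_im >= 4500);            // discard measurements beyond 4.5 m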
I hope this helps you. If you have any questions, just ask :)

cv::Mat to QImage strange behavior

I'm using the code suggested in ( how to convert an opencv cv::Mat to qimage ) to display a cv::Mat in my Qt application. However, I'm getting strange results. The black parts are displayed as black, but all other values are inverted.
Conversion code:
QImage ImgConvert::matToQImage(Mat_<double> src)
{
    double scale = 255.0;
    QImage dest(src.cols, src.rows, QImage::Format_ARGB32);
    for (int y = 0; y < src.rows; ++y) {
        const double *srcrow = src[y];
        QRgb *destrow = (QRgb*)dest.scanLine(y);
        for (int x = 0; x < src.cols; ++x) {
            unsigned int color = srcrow[x] * scale;
            destrow[x] = qRgba(color, color, color, 255);
        }
    }
    return dest;
}
Display code:
void MainWindow::redraw()
{
    static QImage image = ImgConvert::matToQImage(im);
    static QGraphicsPixmapItem item(QPixmap::fromImage(image));
    static QGraphicsScene* scene = new QGraphicsScene;
    scene->addItem(&item);
    ui->graphicsView->setScene(scene);
    ui->graphicsView->repaint();
}
Right now I'm using if(color>0) color = 255-color; to correct for this effect, but I'd much rather understand where it's coming from.
Also, a second mini-question: if I remove the static declarations in redraw(), the image gets removed from memory immediately when the method exits. Is this the best way to fix this, and am I going to have any unintended side effects if I display multiple frames?
I don't know the cause. Setting up an array first sounds like a cleaner way to me; see https://stackoverflow.com/a/3387400/1705967 , which could give you some ideas.
That said, I also use Ypnos's solution with great success on color images. :)
Ah, and as for the second question, don't worry about the QPixmap. It makes the image data private (cloning when necessary), as I have experienced, so you won't overwrite it by mistake.
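For illustration, here is a sketch of redraw() without the static locals (my own suggestion, not from the original answer). QPixmap::fromImage copies the pixel data and the scene takes ownership of the item it creates, so the locals can safely go out of scope; 'scene' is assumed to be a member initialized once (e.g. QGraphicsScene* scene = new QGraphicsScene(this); in the constructor):
    void MainWindow::redraw()
    {
        QImage image = ImgConvert::matToQImage(im);
        scene->clear();                                   // drop items from previous frames
        scene->addPixmap(QPixmap::fromImage(image));      // the pixmap item is owned by the scene
        ui->graphicsView->setScene(scene);
        ui->graphicsView->repaint();
    }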
In case anyone is having this problem, I quickly and dirtily fixed it by subtracting the pixel value from 256:
QImage ImgConvert::matToQImage(Mat_<double> src)
{
    double scale = 255.0;
    QImage dest(src.cols, src.rows, QImage::Format_ARGB32);
    for (int y = 0; y < src.rows; ++y) {
        const double *srcrow = src[y];
        QRgb *destrow = (QRgb*)dest.scanLine(y);
        for (int x = 0; x < src.cols; ++x) {
            unsigned int color = 256 - (srcrow[x] * scale);
            destrow[x] = qRgba(color, color, color, 255);
        }
    }
    return dest;
}
This will slightly corrupt the image, though, shifting its brightness by 1. My purpose was visualization, so the difference was negligible to the eye; however, for certain image-processing applications this corruption might be critical. I could not find out why this was happening, and as I was in a hurry I did not look any further.