OpenCV RGB compare - c++

I'm trying to get the number of differences between two pictures.
When I compare two images in grayscale, pixDiff != 0, but when it comes to RGB, pixDiff is always 0.
I used OpenCV's compare and also a custom loop.
Mat frame, oldFrame;
cap >> oldFrame;
if(analyseMod == MONOCHROME)
    cvtColor(oldFrame, oldFrame, CV_BGR2GRAY);
nbChannels = oldFrame.channels();
while(1)
{
    pixDiff = 0;
    cap >> frame;
    //Test diff
    Mat diff;
    compare(oldFrame, frame, diff, CMP_NE);
    imshow("video 0", diff);
    imshow("video 1", frame);
    if(analyseMod == MONOCHROME)
    {
        cvtColor(frame, frame, CV_BGR2GRAY);
        for(int i=0; i<frame.rows; i++)
            for(int j=0; j<frame.cols; j++)
                if(frame.at<uchar>(i,j) < oldFrame.at<uchar>(i,j) - similarPixelTolerance || frame.at<uchar>(i,j) > oldFrame.at<uchar>(i,j) + similarPixelTolerance)
                    pixDiff++;
    }
    else if(analyseMod == RGB)
    {
        uint8_t *f = (uint8_t *)frame.data;
        uint8_t *o = (uint8_t *)oldFrame.data;
        for(int i=0; i<frame.rows; i++)
        {
            for(int j=0; j<frame.cols; j++)
            {
                if(f[nbChannels*i*frame.cols + j + RED] < o[nbChannels*i*oldFrame.cols + j + RED])
                    pixDiff++;
            }
        }
    }
    frame.copyTo(oldFrame);
    cout << pixDiff;
    if(waitKey(30) >= 0) break;
}
Thanks for the help.

I still don't get why you are not using your delta (the tolerance) in the RGB case, but here is the solution for both cases, if you want to consider the color channels separately. Set CN to 1 for the monochrome case and to 3 for the RGB case.
const int CN = 3; // 3 for RGB, 1 for monochrome
uint8_t *f = frame.ptr<uint8_t>();
uint8_t *o = oldFrame.ptr<uint8_t>();
for(int i = 0; i < frame.rows; ++i)
{
    for(int j = 0; j < frame.cols; ++j)
    {
        for (int c = 0; c < CN; ++c)
        {
            // count every channel value whose difference exceeds the tolerance
            if (abs(*f - *o) > similarPixelTolerance) ++pixDiff;
            ++f, ++o;
        }
    }
}
It is way more efficient to access pixels this way than to call at for each pixel. The only possible problem is if you have some padding in your images, but by default OpenCV uses continuous allocation (you can verify it with frame.isContinuous()).
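As an alternative, here is a minimal sketch of the same count using OpenCV built-ins instead of a manual loop (assuming both frames are continuous 8-bit mats of the same size and channel count):
Mat absDiff;
absdiff(oldFrame, frame, absDiff);                     // per-channel |old - new|
Mat flat = absDiff.reshape(1);                         // view channels as extra columns
pixDiff = countNonZero(flat > similarPixelTolerance);  // count channel values over tolerance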


OpenCV only processes part of the image

I want to apply a negative transformation to an image, which should be a very simple program.
But when I run the program, only 1/3 of the pixels are processed, although I want to transform all of them. I'm not sure where it goes wrong; I took all the code from the book, but the result is different.
I think there is something wrong with the columns, but when I replace I.cols in the negativeImage function with the actual image width, the output stays the same: only 1/3 of the image is processed. If I triple I.cols, all of the pixels in the image are processed.
vector<uchar> getNegativeLUT() {
    vector<uchar> LUT(256, 0);
    for (int i = 0; i < 256; ++i)
        LUT[i] = (uchar)(255 - i);
    return LUT;
}

void negativeImage(Mat& I) {
    vector<uchar> LUT = getNegativeLUT();
    for (int i = 0; i < I.rows; ++i) {
        for (int j = 0; j < I.cols; ++j) {
            I.at<uchar>(i, j) = LUT[I.at<uchar>(i, j)];
        }
    }
}

int main() {
    Mat image = imread("1.png");
    Mat processed_image2 = image.clone();
    negativeImage(processed_image2);
    printf("%d", image.cols);
    imshow("Input Image", image);
    imshow("Negative Image", processed_image2);
    waitKey(0);
    return 0;
}
[Output image]
You need to use the correct type with the at<> operator. Your PNG image has to be converted to 8UC1 before you can access each pixel as uchar. Your image presumably has 3 channels, so you only iterate over 1/3 of the image data. Also, I suggest you use the ptr<> operator in the rows loop and then access each row's pixels as an array.
Mat M;
cvtColor(I, M, CV_BGR2GRAY);
// M is CV_8UC1 type
for(int i = 0; i < M.rows; i++)
{
    uchar* p = M.ptr<uchar>(i);
    for(int j = 0; j < M.cols; j++)
    {
        p[j] = LUT[p[j]];
    }
}
EDIT: you should use cv::LUT instead of doing it yourself.
cv::Mat lut(1, 256, CV_8UC1);
for( int i = 0; i < 256; ++i)
{
    lut.at<uchar>(0,i) = uchar(255-i);
}
cv::Mat result;
cv::LUT(M, lut, result);
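Note that cv::LUT also works directly on the 3-channel BGR image (a single-channel lut is applied to each channel independently), so the grayscale conversion is only needed if you actually want a gray result. A minimal sketch, reusing the lut built above:
cv::Mat negativeColor;
cv::LUT(image, lut, negativeColor);  // per-channel negative of the original BGR image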

OpenCV GrabCut Mask

I have utilised the OpenCV GrabCut functionality to perform an image segmentation. When viewing the segmented image as per the code below, the segmentation is reasonable/correct. However, when looking at (and attempting to use) the segmentation mask values, I am getting some very large numbers, and not the enumerated values one would expect from the cv::GrabCutClasses enum.
void doGrabCut(){
    Vector2i imgDims = getImageDims();
    //Write image to OpenCV Mat.
    const Vector4u *rgb = getRGB();
    cv::Mat rgbMat(imgDims.height, imgDims.width, CV_8UC3);
    for (int i = 0; i < imgDims.height; i++) {
        for (int j = 0; j < imgDims.width; j++) {
            int idx = i * imgDims.width + j;
            rgbMat.ptr<cv::Vec3b>(i)[j][2] = rgb[idx].x;
            rgbMat.ptr<cv::Vec3b>(i)[j][1] = rgb[idx].y;
            rgbMat.ptr<cv::Vec3b>(i)[j][0] = rgb[idx].z;
        }
    }
    //Do graph cut.
    cv::Mat res, fgModel, bgModel;
    cv::Rect bb(bb_begin.x, bb_begin.y, bb_end.x - bb_begin.x, bb_end.y - bb_begin.y);
    cv::grabCut(rgbMat, res, bb, bgModel, fgModel, 10, cv::GC_INIT_WITH_RECT);
    cv::compare(res, cv::GC_PR_FGD, res, cv::CMP_EQ);
    //Write mask.
    Vector4u *maskPtr = getMask();//uchar
    for (int i = 0; i < imgDims.height; i++) {
        for (int j = 0; j < imgDims.width; j++) {
            cv::GrabCutClasses classification = res.at<cv::GrabCutClasses>(i, j);
            int idx = i * imgDims.width + j;
            std::cout << classification << std::endl;//Strange numbers here.
            maskPtr[idx].x = (classification == cv::GC_PR_FGD) ? 255 : 0;//This always evaluates to 0.
        }
    }
    cv::Mat foreground(rgbMat.size(), CV_8UC3, cv::Scalar(255, 255, 255));
    rgbMat.copyTo(foreground, res);
    cv::imshow("GC Output", foreground);
}
Why would one get numbers outside the enumeration when the segmentation is qualitatively correct?
I doubt your //Write mask step: why do you re-iterate res and modify maskPtr as maskPtr[idx].x = (classification == cv::GC_PR_FGD) ? 255 : 0;? You already have a single-channel binary image stored in the res variable: cv::compare() returns a binary image (0 or 255), so after the compare the values in res are no longer GrabCut classes.
However, if you still want to debug the values by iterating, then you should use the standard technique for iterating a single-channel image:
for (int i = 0; i < res.rows; i++) {
    for (int j = 0; j < res.cols; j++) {
        uchar classification = res.at<uchar>(i, j);
        std::cout << int(classification) << ", ";
    }
}
As you are iterating a single-channel CV_8U mat, you must use res.at<uchar>(i, j) and not res.at<cv::GrabCutClasses>: the enum is int-sized, so at<cv::GrabCutClasses> reads four bytes per access, which is exactly where your strange large numbers come from.
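If you do need to fill your own mask buffer, a minimal sketch (assuming res already holds the 0/255 output of cv::compare, and maskPtr/getMask() are as in the question):
for (int i = 0; i < res.rows; i++) {
    const uchar* r = res.ptr<uchar>(i);
    for (int j = 0; j < res.cols; j++) {
        maskPtr[i * res.cols + j].x = r[j];  // 255 for probable foreground, 0 otherwise
    }
}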

How to set pixel value of a cv::Mat1b?

I have copied a grayscale image into a cv::Mat1b, and I want to loop through each pixel and read and change its value. How can I do that?
My code looks like this:
cv::Mat1b newImg;
grayImg.copyTo(newImg);
for (int i = 0; i < grayImg.rows; i++) {
    for (int j = 0; j < grayImg.cols; j++) {
        int pixelValue = static_cast<int>(newImg.at<uchar>(i, j));
        if(pixelValue > thresh)
            newImg.at<int>(i,j) = 0;
        else
            newImg.at<int>(i, j) = 255;
    }
}
But in the assignments (inside of if and else), I get the error Access violation writing location.
How do I read and write specific pixels correctly?
Thanks!
Edit
Thanks to @Miki and @Micka, this is how I solved it:
for (int i = 0; i < newImg.rows; i++) {
    for (int j = 0; j < newImg.cols; j++) {
        // read:
        cv::Scalar intensity1 = newImg.at<uchar>(i,j);
        int intensity = intensity1.val[0];
        // write:
        newImg(i, j) = 255;
    }
}
newImg.at<int>(i,j)
should be
newImg.at<uchar>(i,j)
because cv::Mat1b is of uchar type: at<int> writes 4 bytes through a 1-byte-per-pixel matrix, running past the buffer, which is what triggers the access violation.
I suggest:
cv::Mat1b newImg;
newImg = grayImg > thresh ;
or
cv::Mat1b newImg;
newImg = grayImg < thresh ;
Also look at the OpenCV tutorials to learn how to scan each and every pixel of an image.
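For the exact behaviour of the original loop (0 where the pixel is above thresh, 255 otherwise), a minimal sketch using cv::threshold, assuming grayImg is CV_8UC1:
cv::Mat1b newImg;
cv::threshold(grayImg, newImg, thresh, 255, cv::THRESH_BINARY_INV);  // > thresh -> 0, else 255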

Implementation of 'imquantize' function in opencv

I am trying to implement the Matlab function imquantize using OpenCV. Which OpenCV thresholding function should I use to implement the Matlab function multithresh? Once thresholding has been done, how do I label the pixels according to the thresholds? Is this the right way to implement imquantize? Are there any other functions I should include in the code?
There is an implementation based on OpenCV here, from which you should get the idea:
cv::Mat
imquantize(const cv::Mat& in, const arma::fvec& thresholds) {
    BOOST_ASSERT_MSG(cv::DataType<float>::type == in.type(), "input is not of type float");
    cv::Mat index(in.size(), in.type(), cv::Scalar::all(1));
    for (int i = 0; i < thresholds.size() ; i++) {
        cv::Mat temp = (in > thresholds(i)) / 255;
        temp.convertTo(temp, cv::DataType<float>::type);
        index += temp;
    }
    return index;
}
Updated: thresholds is a vector of float threshold values, uniformly distributed according to the number of levels you want to quantize to within [0, 1]. Check this code snippet for how it is used:
const float step = 1./levels[i];
arma::fvec thresh = arma::linspace<arma::fvec>(step, 1.-step, levels[i]-1);
channels[i] = imquantize(channels[i], thresh);
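If you don't want the Armadillo dependency, a minimal sketch building the same uniform thresholds with a plain std::vector (illustration only; you would also have to change the imquantize signature accordingly):
int levels = 4;  // assumed number of quantization levels
std::vector<float> thresholds;
for (int t = 1; t < levels; ++t)
    thresholds.push_back(static_cast<float>(t) / levels);  // t/levels, in (0, 1)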
I suppose you are looking for something like this:
/* function imquantize
 * 'inputImage' is the input image.
 * 'levels' is an array of thresholds.
 * 'quantizedImage' is the returned image
 * with quantized levels.
 */
Mat imquantize(Mat inputImage, vector<vector<int> > levels)
{
    //initialise output label matrix
    Mat quantizedImage(inputImage.size(), inputImage.type(), Scalar::all(1));
    //Apply labels to the pixels according to the thresholds
    for (int i = 0; i < inputImage.cols; i++)
    {
        for (int j = 0; j < inputImage.rows; j++)
        {
            // Check if image is grayscale or BGR
            if(levels.size() == 1)
            {
                for (int k = 0; k < levels[0].size(); k++)
                {
                    // if pixel <= lowest threshold, then assign 0
                    if(inputImage.at<uchar>(j,i) <= levels[0][0])
                    {
                        quantizedImage.at<uchar>(j,i) = 0;
                    }
                    // if pixel >= highest threshold, then assign 255
                    else if(inputImage.at<uchar>(j,i) >= levels[0][levels[0].size()-1])
                    {
                        quantizedImage.at<uchar>(j,i) = 255;
                    }
                    // Check the level borders for the pixel and assign the
                    // corresponding upper-bound quantization level to the pixel
                    else
                    {
                        if(levels[0][k] < inputImage.at<uchar>(j,i) && inputImage.at<uchar>(j,i) <= levels[0][k+1])
                        {
                            quantizedImage.at<uchar>(j,i) = (k+1)*255/(levels[0].size());
                        }
                    }
                }
            }
            else
            {
                Vec3b pixel = inputImage.at<Vec3b>(j,i);
                // Processing the Blue Channel
                for (int k = 0; k < levels[0].size(); k++)
                {
                    if( pixel.val[0] <= levels[0][0])
                    {
                        quantizedImage.at<Vec3b>(j,i)[0] = 0;
                    }
                    else if( pixel.val[0] >= levels[0][levels[0].size()-1])
                    {
                        quantizedImage.at<Vec3b>(j,i)[0] = 255;
                    }
                    else
                    {
                        if(levels[0][k] < pixel.val[0] && pixel.val[0] <= levels[0][k+1])
                        {
                            quantizedImage.at<Vec3b>(j,i)[0] = (k+1)*255/(levels[0].size());
                        }
                    }
                }
                // Processing the Green Channel
                for (int k = 0; k < levels[1].size(); k++)
                {
                    if( pixel.val[1] <= levels[1][0])
                    {
                        quantizedImage.at<Vec3b>(j,i)[1] = 0;
                    }
                    else if( pixel.val[1] >= levels[1][levels[1].size()-1])
                    {
                        quantizedImage.at<Vec3b>(j,i)[1] = 255;
                    }
                    else
                    {
                        if(levels[1][k] < pixel.val[1] && pixel.val[1] <= levels[1][k+1])
                        {
                            quantizedImage.at<Vec3b>(j,i)[1] = (k+1)*255/(levels[1].size());
                        }
                    }
                }
                // Processing the Red Channel
                for (int k = 0; k < levels[2].size(); k++)
                {
                    if( pixel.val[2] <= levels[2][0])
                    {
                        quantizedImage.at<Vec3b>(j,i)[2] = 0;
                    }
                    else if( pixel.val[2] >= levels[2][levels[2].size()-1])
                    {
                        quantizedImage.at<Vec3b>(j,i)[2] = 255;
                    }
                    else
                    {
                        if(levels[2][k] < pixel.val[2] && pixel.val[2] <= levels[2][k+1])
                        {
                            quantizedImage.at<Vec3b>(j,i)[2] = (k+1)*255/(levels[2].size());
                        }
                    }
                }
            }
        }
    }
    return quantizedImage;
}
In this function the input has to be a Mat image and a 2D vector of thresholds, which can hold different levels for different channels.
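A hypothetical usage sketch for the grayscale case (the file name and threshold values are made up for illustration):
Mat gray = imread("1.png", 0);  // 0 = load as single-channel grayscale
vector<int> thresholds;
thresholds.push_back(64);
thresholds.push_back(128);
thresholds.push_back(192);      // three thresholds -> four output bands
vector<vector<int> > levels;
levels.push_back(thresholds);
Mat quantized = imquantize(gray, levels);
imshow("Quantized", quantized);
waitKey(0);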

How can I convert a Mat_ to Mat in the following code?

Below is a snippet from the OpenCV SVM tutorial at this link. In that snippet is this line of code: Mat sampleMat = (Mat_<float>(1,2) << j,i);. Instead of using the Mat_ template, I need to use a regular Mat object. I was hoping someone could show me how to convert the Mat_ to a Mat in that line.
I tried Mat sampleMat = (Mat(1,2, CV_32FC1) << j,i); //but get a long page of errors
I tried Mat sampleMat = Mat(1,2, CV_32FC1) << j,i; //same, long page of errors
I just need the code at the link at the top of the page to run without using Mat_, with only a Mat in its place. If someone can show me how to write that line, I'd appreciate it.
for (int i = 0; i < image.rows; ++i)
    for (int j = 0; j < image.cols; ++j)
    {
        Mat sampleMat = (Mat_<float>(1,2) << j,i);
        float response = SVM.predict(sampleMat);
        if (response == 1)
            image.at<Vec3b>(i,j) = green;
        else if (response == -1)
            image.at<Vec3b>(i,j) = blue;
    }
Edit: I am trying to run it as below but am getting errors:
Vec3b green(0,255,0), blue(255,0,0);
// Show the decision regions given by the SVM
for (int i = 0; i < image.rows; ++i)
    for (int j = 0; j < image.cols; ++j)
    {
        Mat sampleMat(1, 2, CV_32F);
        float * const pmat = sampleMat.ptr<float>();
        pmat[0] = i;
        pmat[1] = j;
        float response = SVM.predict(sampleMat);
        if (response == 1)
            pmat[0] = green;
            pmat[1] = green;
        else if (response == -1)
            pmat[0] = blue;
            pmat[1] = blue;
    }
I figured you'd know enough that I didn't need to post the errors =)
Set the values directly:
Mat sampleMat(1, 2, CV_32F);
sampleMat.at<float>(0,0) = j;
sampleMat.at<float>(0,1) = i;
or
Mat sampleMat(1, 2, CV_32F);
float * const pmat = sampleMat.ptr<float>();
pmat[0] = j;
pmat[1] = i;
Addendum:
Seeing your loop, you could make it a bit more efficient in the case that SVM.predict doesn't modify sampleMat: you can set the row coordinate just once per row, instead of writing it on every iteration:
for (int i = 0; i < image.rows; ++i)
{
    Mat sampleMat(1, 2, CV_32F);
    sampleMat.at<float>(0, 1) = i;
    for (int j = 0; j < image.cols; ++j)
    {
        sampleMat.at<float>(0, 0) = j;
        ...
    }
}
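For completeness, a sketch of the full loop with the prediction step folded back in (SVM, image, green, and blue as in the tutorial snippet above):
for (int i = 0; i < image.rows; ++i)
{
    Mat sampleMat(1, 2, CV_32F);
    sampleMat.at<float>(0, 1) = i;          // row coordinate, set once per row
    for (int j = 0; j < image.cols; ++j)
    {
        sampleMat.at<float>(0, 0) = j;      // column coordinate
        float response = SVM.predict(sampleMat);
        if (response == 1)
            image.at<Vec3b>(i, j) = green;
        else if (response == -1)
            image.at<Vec3b>(i, j) = blue;
    }
}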