Understanding nested array c++ in image scanning context - c++

I came across this sample code in the OpenCV library. What does the line p[j] = table[p[j]] do? I have come across multi-dimensional arrays, but not something like this before.
Mat& ScanImageAndReduceC(Mat& I, const uchar* const table)
{
    // accept only char type matrices
    CV_Assert(I.depth() == CV_8U);

    int channels = I.channels();
    int nRows = I.rows;
    int nCols = I.cols * channels;

    if (I.isContinuous())
    {
        nCols *= nRows;
        nRows = 1;
    }

    int i, j;
    uchar* p;
    for (i = 0; i < nRows; ++i)
    {
        p = I.ptr<uchar>(i);
        for (j = 0; j < nCols; ++j)
        {
            p[j] = table[p[j]];
        }
    }
    return I;
}

It is doing intensity replacement using a table in which each possible pixel intensity maps to some other value. This is commonly used for techniques like color grading, histogram adjustment, or thresholding.
Here, the table contains unsigned char values and is indexed by the value of the pixel. The pixel's intensity p[j] is used as an index into the table, and the value found at that index is then written back to the pixel, replacing its original value.

It is a lookup table conversion.
The pixels of the image I are converted by means of the table.
For example, the pixel with value 100 would be changed to 10 if table[100] = 10.
Your sample code comes from an OpenCV tutorial, which explains well what the code does:
https://docs.opencv.org/master/db/da5/tutorial_how_to_scan_images.html
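Stripped of OpenCV, the lookup-table pass reduces to a few lines on a raw byte buffer. This is only a sketch: the applyLUT name and the divide-by-10 quantization table are invented here for illustration, in the spirit of the tutorial's color-space reduction.

```cpp
#include <cstddef>

// Apply an intensity lookup table in place: each byte becomes table[byte].
// This is exactly the p[j] = table[p[j]] step from the snippet above.
void applyLUT(unsigned char* pixels, std::size_t count,
              const unsigned char* table)
{
    for (std::size_t i = 0; i < count; ++i)
        pixels[i] = table[pixels[i]];
}
```

With a table such as table[v] = (v / 10) * 10, a pixel of value 107 becomes 100: one table lookup per pixel, no arithmetic in the inner loop.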

Related

How to save to vector coordinates of white pixels?

I want to loop through a binarized cv::Mat and save all coordinates of pixels with a value of 255.
cv::Mat bin;
std::vector<cv::Point2i> binVec;
int h = 0;
int white = 254; //Just for comparison with pointer of Matrix value
for (int i = 0; i < bin.rows; i++, h++) {
    for (int j = 0; j < bin.cols; j++, h++) {
        int* p = bin.ptr<int>(h); //Pointer to bin Data, should loop through Matrix
        if (p >= &white) //If a white pixel has been found, push i and j in binVec
            binVec.push_back(cv::Point2i(i, j));
    }
}
This snippet is not working, and I don't know why.
Exception thrown at 0x76C6C42D in example.exe: Microsoft C++ exception: cv::Exception at memory location 0x0019E4F4.
Unhandled exception at 0x76C6C42D in example.exe: Microsoft C++ exception: cv::Exception at memory location 0x0019E4F4.
So how can I count h and let the pointer work?
You can avoid scanning the image altogether. To save the coordinates of all white pixels in a vector, you can do something like:
Mat bin;
// fill bin with some value
std::vector<Point> binVec;
findNonZero(bin == 255, binVec);
You can use Point instead of Point2i, since they are the same:
typedef Point2i Point;
If you really want to use a for loop, you should do it like this:
const uchar white = 255;
for (int r = 0; r < bin.rows; ++r)
{
    uchar* ptr = bin.ptr<uchar>(r);
    for (int c = 0; c < bin.cols; ++c)
    {
        if (ptr[c] == 255) {
            binVec.push_back(Point(c, r));
        }
    }
}
Remember that:
your binary image is probably CV_8UC1, not CV_32SC1, so you should use uchar instead of int.
bin.ptr<...>(i) gives you a pointer to the start of the i-th row, so you should take it out of the inner loop.
you should compare the values, not the addresses.
Point takes as parameters x (cols) and y (rows), while you are passing i (rows) and j (cols), so you need to swap them.
this loop can be further optimized, but for your task I strongly recommend the findNonZero approach, so I don't show it here.
You increment h twice per pixel, once in each loop, so it runs ahead of the data.
You should compare the value pointed at by p with white, not compare p with the address of white.
Note also that a binarized image is normally CV_8UC1, so the data should be accessed through a uchar*, and ptr() returns a pointer to the start of a row, not to a single pixel.
So:
cv::Mat bin;
std::vector<cv::Point2i> binVec;
const uchar white = 254; //Just for comparison with the Matrix value
for (int i = 0; i < bin.rows; i++) {
    const uchar* p = bin.ptr<uchar>(i); //Pointer to the start of row i of bin
    for (int j = 0; j < bin.cols; j++) {
        if (p[j] >= white) //If a white pixel has been found, push its coordinates
            binVec.push_back(cv::Point2i(j, i));
    }
}
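As a sanity check without OpenCV, the same row-pointer pattern can be run over a plain row-major buffer. The whitePixels name and the buffer layout (one byte per pixel, rows stored contiguously, mimicking a continuous CV_8UC1 Mat) are assumptions of this sketch.

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Collect (x, y) coordinates of all pixels equal to 255 in a
// row-major 8-bit buffer of size rows * cols.
std::vector<std::pair<int, int>> whitePixels(const unsigned char* data,
                                             int rows, int cols)
{
    std::vector<std::pair<int, int>> out;
    for (int r = 0; r < rows; ++r) {
        // One pointer per row, taken outside the inner loop.
        const unsigned char* p = data + (std::size_t)r * cols;
        for (int c = 0; c < cols; ++c)
            if (p[c] == 255)
                out.push_back({c, r}); // x = column, y = row
    }
    return out;
}
```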

Speeding up access to a array with pointers in C++

I am trying to make a fast image threshold function. Currently what I do is:
void threshold(const cv::Mat &input, cv::Mat &output, uchar threshold) {
    int rows = input.rows;
    int cols = input.cols;

    // cv::Mat for result
    output.create(rows, cols, CV_8U);

    if (input.isContinuous()) { //we have to make sure that we are dealing with a contiguous memory chunk
        const uchar* p;
        for (int r = 0; r < rows; ++r) {
            p = input.ptr<uchar>(r);
            for (int c = 0; c < cols; ++c) {
                if (p[c] >= threshold)
                    //how to access output faster??
                    output.at<uchar>(r, c) = 255;
                else
                    output.at<uchar>(r, c) = 0;
            }
        }
    }
}
I know that the at() function is quite slow. How can I set the output faster, or in other words how to relate the pointer which I get from the input to the output?
You are thinking of at as the C++ standard library documents it for some containers: performing a range check and throwing if out of bounds. However, this is not the standard library; it is OpenCV.
According to the cv::Mat::at documentation:
The template methods return a reference to the specified array element. For the sake of higher performance, the index range checks are only performed in the Debug configuration.
So there's no range check as you may be thinking.
Comparing both cv::Mat::at and cv::Mat::ptr in the source code we can see they are almost identical.
So cv::Mat::ptr<>(row) is as expensive as
return (_Tp*)(data + step.p[0] * y);
While cv::Mat::at<>(row, column) is as expensive as:
return ((_Tp*)(data + step.p[0] * i0))[i1];
You might want to take cv::Mat::ptr directly instead of calling cv::Mat::at for every column, to avoid repeating the data + step.p[0] * i0 computation, doing the [i1] indexing yourself.
So you would do:
/* output.create and stuff */
const uchar* p;
uchar* o; // o must be a non-const pointer so the output can be written
for (int r = 0; r < rows; ++r) {
    p = input.ptr<uchar>(r);
    o = output.ptr<uchar>(r); // <-----
    for (int c = 0; c < cols; ++c) {
        if (p[c] >= threshold)
            o[c] = 255;
        else
            o[c] = 0;
    }
}
As a side note, you don't need to (and shouldn't) check cv::Mat::isContinuous here: the gaps are between one row and the next, and since you take a pointer to a single row at a time, you never cross them.
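The same two-pointer pattern, stripped of OpenCV, looks like this on a raw row buffer. The thresholdRow name is invented for this sketch; it assumes the caller passes it one row at a time, as obtained from input.ptr and output.ptr above.

```cpp
// Threshold one row: out[c] = 255 if in[c] >= t, else 0.
// Both pointers address the same row of the input and output images.
void thresholdRow(const unsigned char* in, unsigned char* out,
                  int cols, unsigned char t)
{
    for (int c = 0; c < cols; ++c)
        out[c] = (in[c] >= t) ? 255 : 0;
}
```

The inner loop is branch-simple and touches memory sequentially, which is what makes the row-pointer version faster than per-element at calls.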

mean of an image opencv?

I'm not certain about my mean function. In Matlab, the mean of my image is 135.3565 by using mean2; however, my function gives 140.014 and OpenCV built-in cv::mean gives me [137.67, 152.467, 115.933, 0]. This is my code.
double _mean(const cv::Mat &image)
{
    double N = image.rows * image.cols;
    double mean;
    for (int rows = 0; rows < image.rows; ++rows)
    {
        for (int cols = 0; cols < image.cols; ++cols)
        {
            mean += (float)image.at<uchar>(rows, cols);
        }
    }
    mean /= N;
    return mean;
}
My guess is that you are feeding one type of image to Matlab and another type to your algorithm and to the OpenCV built-in function.
Matlab's mean2 function takes a 2D (grayscale) image. Your function assumes that the image is a 2D matrix of unsigned chars (grayscale too), so when you do this:
mean += (float)image.at<uchar>(rows, cols);
and pass a color image to the function, an incorrect value is retrieved. Try converting your image to grayscale before passing it to your function, and compare the result with Matlab.
For a color image, modify your function to this:
double _mean(const cv::Mat &image)
{
    double N = image.rows * image.cols * image.channels();
    double mean = 0.0; // must be initialized before accumulating
    for (int rows = 0; rows < image.rows; ++rows)
    {
        for (int cols = 0; cols < image.cols; ++cols)
        {
            for (int channels = 0; channels < image.channels(); ++channels)
            {
                mean += image.at<cv::Vec3b>(rows, cols)[channels];
            }
        }
    }
    mean /= N;
    return mean;
}
and in Matlab compute the mean with
mean(image(:))
which vectorizes your image before computing the mean. Compare the results.
The OpenCV function computes the mean of each channel of the image separately, so the result is a vector of per-channel means.
I hope this will help!
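The grand mean over every byte, the analogue of Matlab's mean(image(:)) on a 3-channel image, can be sketched without OpenCV on an interleaved buffer. The grandMean name and the BGR interleaved layout are assumptions of this sketch.

```cpp
#include <cstddef>

// Mean over every byte of an interleaved (e.g. BGR) image buffer:
// sum all rows * cols * channels samples, then divide by that count.
double grandMean(const unsigned char* data, int rows, int cols, int channels)
{
    double sum = 0.0; // accumulate in double to avoid overflow
    std::size_t n = (std::size_t)rows * cols * channels;
    for (std::size_t i = 0; i < n; ++i)
        sum += data[i];
    return sum / (double)n;
}
```

When every channel has the same pixel count, this scalar equals the average of the per-channel means that cv::mean returns, which is why the two results can be reconciled.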

Histogram not accurate for dark images

I'm using the following algorithm to calculate the histogram from a YUV420sp image. It seems to work, but the result is not 100% accurate for a fully dark image. When the image is dark I would expect a high peak on the left side of the histogram, showing that the image is too dark, but in that scenario the algorithm instead shows a flat line, no peak. In the other, lighter scenarios the histogram seems to be accurate.
void calculateHistogram(const unsigned char* yuv420sp, const int yuvWidth, const int yuvHeight, const int histogramControlHeight, int* outHistogramData)
{
    const int BINS = 256;

    // Clear the output
    memset(outHistogramData, 0, BINS * sizeof(int));

    // Get YUV brightness values
    const int totalPixels = yuvWidth * yuvHeight;
    for (int index = 0; index < totalPixels; index++)
    {
        char brightness = yuv420sp[index];
        outHistogramData[brightness]++;
    }

    // Get the maximum brightness
    int maxBrightness = 0;
    for (int index = 0; index < BINS; index++)
    {
        if (outHistogramData[index] > maxBrightness)
        {
            maxBrightness = outHistogramData[index];
        }
    }

    // Normalize to fit the UI control height
    const int maxNormalized = BINS * histogramControlHeight / maxBrightness;
    for (int index = 0; index < BINS; index++)
    {
        outHistogramData[index] = (outHistogramData[index] * maxNormalized) >> 8;
    }
}
[SOLVED by galop1n] Though galop1n's implementation is much nicer, I'm updating this one with the corrections in case it is of use to anyone.
Changes:
1) Reading brightness values into an unsigned char instead of a char.
2) Moved the UI normalization division into the normalization loop.
void calculateHistogram(const unsigned char* yuv420sp, const int yuvWidth, const int yuvHeight, const int histogramCanvasHeight, int* outHistogramData)
{
    const int BINS = 256;

    // Clear the output
    memset(outHistogramData, 0, BINS * sizeof(int));

    // Get YUV brightness values
    const int totalPixels = yuvWidth * yuvHeight;
    for (int index = 0; index < totalPixels; index++)
    {
        unsigned char brightness = yuv420sp[index];
        outHistogramData[brightness]++;
    }

    // Get the maximum brightness
    int maxBrightness = 0;
    for (int index = 0; index < BINS; index++)
    {
        if (outHistogramData[index] > maxBrightness)
        {
            maxBrightness = outHistogramData[index];
        }
    }

    // Normalize to fit the UI control height
    for (int index = 0; index < BINS; index++)
    {
        outHistogramData[index] = outHistogramData[index] * histogramCanvasHeight / maxBrightness;
    }
}
There are at least two bugs in your implementation.
The indexing by brightness, because of using a temporary of type signed char.
The final normalization result depends on both the control height and the maximum pixel count in a bin; because of that, the division cannot really be moved outside the loop.
I also recommend using a std::array (needs C++11) to store the result instead of a raw pointer, as there is a risk that the caller does not allocate enough space for what the function will use.
#include <algorithm>
#include <array>

void calculateHistogram(const unsigned char* yuv420sp, const int yuvWidth, const int yuvHeight, const int histogramControlHeight, std::array<int, 256> &outHistogramData) {
    outHistogramData.fill(0);

    std::for_each(yuv420sp, yuv420sp + yuvWidth * yuvHeight, [&](int e) {
        outHistogramData[e]++;
    });

    int maxCountInBins = *std::max_element(begin(outHistogramData), end(outHistogramData));

    for (int &bin : outHistogramData)
        bin = bin * histogramControlHeight / maxCountInBins;
}
If the maximum brightness of the image maxBrightness is zero, your calculation of maxNormalized becomes a division by zero. I suspect this is where your problem is.
Without better understanding what normalization conditions you are trying to establish, I am not sure what alternative to suggest to you right now.
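The signed char bug called out above can be demonstrated in a few lines of plain C++. The binIndexSigned/binIndexUnsigned names are hypothetical helpers for this sketch, and the -56 result assumes the common case of a two's-complement platform where plain char is signed.

```cpp
// Reproduces the question's bug: a luma byte above 127 stored in a
// signed char becomes negative, so outHistogramData[brightness] would
// index before the start of the array (undefined behavior).
int binIndexSigned(unsigned char luma)
{
    signed char brightness = (signed char)luma; // the bug
    return brightness; // e.g. 200 becomes -56
}

// The fix: keep the byte unsigned so the bin index stays in [0, 255].
int binIndexUnsigned(unsigned char luma)
{
    unsigned char brightness = luma;
    return brightness;
}
```

In a fully dark image most luma values stay below 128 and happen to index correctly, which is why the corruption only shows up as a missing peak rather than an obvious crash.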

Fastest way to extract individual pixel data?

I have to get information about the scalar value of a lot of pixels on a gray-scale image using OpenCV. It will be traversing hundreds of thousands of pixels so I need the fastest possible method. Every other source I've found online has been very cryptic and hard to understand. Is there a simple line of code that should just hand a simple integer value representing the scalar value of the first channel (brightness) of the image?
for (int row = 0; row < image.rows; row++) {
    const unsigned char *data = image.ptr(row);
    for (int col = 0; col < image.cols; col++) {
        // then use *data for the pixel value, assuming you know the order, RGB etc
        // Note 'rgb' is actually stored B,G,R
        unsigned char blue = *data++;
        unsigned char green = *data++;
        unsigned char red = *data++;
    }
}
You need to get the data pointer afresh on each new row, because OpenCV may pad each row so that the next one starts on a 32-bit boundary, leaving gaps at the end of rows.
With regards to Martin's post, you can actually check if the memory is allocated continuously using the isContinuous() method in OpenCV's Mat object. The following is a common idiom for ensuring the outer loop only loops once if possible:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp> // for imread
using namespace cv;

int main(void)
{
    Mat img = imread("test.jpg");

    int rows = img.rows;
    int cols = img.cols;

    if (img.isContinuous())
    {
        cols = rows * cols; // Loop over all pixels as 1D array.
        rows = 1;
    }

    for (int i = 0; i < rows; i++)
    {
        Vec3b *ptr = img.ptr<Vec3b>(i);
        for (int j = 0; j < cols; j++)
        {
            Vec3b pixel = ptr[j];
        }
    }
    return 0;
}