Conversion between C++ class and OpenCV matrix operation - C++

I am trying to convert the following C++ code into an OpenCV matrix operation (also in C++):
double myCode::calculate ( int i, int au )
{
    double k = 0;
    for ( int j = 0; j < N; j++ )
    {
        k += fabs(data[i][j] - means[au][j]);
    }
    return k;
}
I want to define "data" and "means" as OpenCV matrix types, like:
cv::Mat data ( NUMBER_OF_OBSERVATIONS, N, CV_8UC3 );
cv::Mat means = cv::Mat::zeros ( 5, N, CV_8UC3 );
then implement the above function for the cv::Mat objects "data" and "means". How can I do that? In particular, I don't know how to write the line:
k += fabs(data[i][j] - means[au][j]);
Thanks a lot.

You can simply write
double myCode::calculate ( int i, int au )
{
    cv::Mat diff;
    cv::absdiff(data.row(i), means.row(au), diff); // per-element |a - b|
    cv::Scalar res = cv::sum(diff);
    return res[0] + res[1] + res[2]; // sum all the channels together
}
Note that row() returns a header for a single matrix row without copying any data, and cv::absdiff() is used rather than a plain subtraction because subtracting unsigned 8-bit matrices would saturate negative differences to zero.
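Alternatively, a minimal one-line sketch using cv::norm, which computes the L1 distance (the sum of absolute differences over all elements and channels) directly:
double myCode::calculate ( int i, int au )
{
    return cv::norm(data.row(i), means.row(au), cv::NORM_L1);
}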

A simple way to access pixels in OpenCV Mat objects is with the at() operator.
If your data type were 1-channel unsigned char (CV_8UC1), you could simply do this:
k += fabs(data.at<uchar>(i,j) - means.at<uchar>(au,j)); // works for CV_8UC1 type
However, you have 3 channels (blue, green, red), dictated by the C3 in your CV_8UC3 datatype. So, here's how to do your k += fabs(...) on each channel individually:
//for CV_8UC3 type
k += fabs(data.at<cv::Vec3b>(i,j)[0] - means.at<cv::Vec3b>(au,j)[0]); // Blue Channel
k += fabs(data.at<cv::Vec3b>(i,j)[1] - means.at<cv::Vec3b>(au,j)[1]); // Green Channel
k += fabs(data.at<cv::Vec3b>(i,j)[2] - means.at<cv::Vec3b>(au,j)[2]); // Red Channel
This post offers further explanation about pixel access.
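Putting this together, a minimal sketch of the whole function (assuming data and means are CV_8UC3 and N is the number of columns):
double myCode::calculate ( int i, int au )
{
    double k = 0.0;
    for ( int j = 0; j < N; j++ )
    {
        cv::Vec3b d = data.at<cv::Vec3b>(i, j);
        cv::Vec3b m = means.at<cv::Vec3b>(au, j);
        for ( int c = 0; c < 3; c++ )   // B, G, R
            k += std::abs(d[c] - m[c]); // uchar operands promote to int, so no unsigned wrap-around
    }
    return k;
}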

Related

How to convert image storage order from channel-height-width to height-width-channel?

I would like to know how to convert an image stored as a 1D std::vector<float> from CHW format (Channel, Height, Width) to HWC format (Height, Width, Channel) in C++. The format change is needed due to requirements of a neural network.
I used OpenCV to read and show the image as below:
cv::namedWindow("Screenshot", cv::WINDOW_AUTOSIZE );
cv::imshow("Screenshot", rgbImage);
Then I converted the cv::Mat rgbImage to a 1D std::vector<float> in format CHW:
size_t channels = 3;
std::vector<float> data(channels*ROS_IMAGE_HEIGHT*ROS_IMAGE_WIDTH);
for(size_t j=0; j<ROS_IMAGE_HEIGHT; j++){
    for(size_t k=0; k<ROS_IMAGE_WIDTH; k++){
        cv::Vec3b intensity = rgbImage.at<cv::Vec3b>(j, k);
        for(size_t i=0; i<channels; i++){
            data[i*ROS_IMAGE_HEIGHT*ROS_IMAGE_WIDTH + j*ROS_IMAGE_HEIGHT + k] = (float) intensity[i];
        }
    }
}
Now I want to convert the format of std::vector<float> data to HWC. How can I do this?
I found some description of the "CHW" and "HWC" formats here.
If the storage order is HWC, it means that
Each sample is stored as a column-major matrix (height, width) of float[numChannels] (r00, g00, b00, r10, g10, b10, r01, g01, b01, r11, g11, b11).
Thus a pixel (x, y, c) is found using
xStride = channels;
yStride = channels * width;
cStride = 1;
data[x*xStride + y*yStride + c*cStride]
If the storage order is CHW, it means that each channel is a different plane. A pixel (x, y, c) is found using
xStride = 1;
yStride = width;
cStride = width * height;
data[x*xStride + y*yStride + c*cStride]
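As a small sketch, the two formulas can be wrapped in index helpers (the function names are illustrative):
inline size_t hwcIndex(size_t x, size_t y, size_t c, size_t width, size_t channels)
{
    return x*channels + y*channels*width + c; // xStride = channels, yStride = channels*width, cStride = 1
}
inline size_t chwIndex(size_t x, size_t y, size_t c, size_t width, size_t height)
{
    return x + y*width + c*width*height; // xStride = 1, yStride = width, cStride = width*height
}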
Note that in the code in the question, data[i*ROS_IMAGE_HEIGHT*ROS_IMAGE_WIDTH + j*ROS_IMAGE_HEIGHT + k] is incorrect: j is the y-coordinate and should be multiplied by ROS_IMAGE_WIDTH.
The code in the question can be modified to yield a std::vector in the HWC format by replacing the line in the innermost loop by:
data[i + j*ROS_IMAGE_WIDTH*channels + k*channels] = (float) intensity[i];
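Alternatively, if the CHW vector already exists, here is a minimal sketch that reorders it into a new HWC vector in one pass (the function name is illustrative):
#include <vector>

std::vector<float> chw_to_hwc(const std::vector<float>& chw,
                              size_t channels, size_t height, size_t width)
{
    std::vector<float> hwc(chw.size());
    for (size_t c = 0; c < channels; c++)
        for (size_t y = 0; y < height; y++)
            for (size_t x = 0; x < width; x++)
                // read plane-major (CHW), write pixel-major (HWC)
                hwc[y*width*channels + x*channels + c] = chw[c*height*width + y*width + x];
    return hwc;
}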

FFTW + OpenCV inconsistent output

I recently tried to implement an FFT function for OpenCV's Mat.
I based my implementation mainly on FFTW's code samples and on:
FFTW-OpenCV
I paid close attention to adapting the size of the input image in order to speed up the processing.
It seems that I did something wrong because the output is always a black image.
Here is my implementation:
void fft2_32f(const cv::Mat1f& _src, cv::Mat2f& dst)
{
    cv::Mat2f src;
    const int rows = cv::getOptimalDFTSize(_src.rows);
    const int cols = cv::getOptimalDFTSize(_src.cols);
    // const int total = cv::alignSize(rows*cols,steps);
    if(_src.isContinuous() && _src.rows == rows && _src.cols == cols)
    {
        src = cv::Mat2f::zeros(_src.size());
        dst = cv::Mat2f::zeros(_src.size());
        // 1) copy the source into a complex matrix (the imaginary component is set to 0).
        cblas_scopy(src.total(), _src.ptr<float>(), 1, src.ptr<float>(), 2);
        // 2) prepare and apply the transform.
        fftwf_complex* ptr_in = reinterpret_cast<fftwf_complex*>(src.ptr<float>());
        fftwf_complex* ptr_out = reinterpret_cast<fftwf_complex*>(dst.ptr<float>());
        // fftwf_plan fft = fftwf_plan_dft_1d(src.total(), ptr_in, ptr_out, FFTW_FORWARD, FFTW_ESTIMATE);
        fftwf_plan fft = fftwf_plan_dft_2d(src.rows, src.cols, ptr_in, ptr_out, FFTW_FORWARD, FFTW_ESTIMATE);
        fftwf_execute(fft);
        fftwf_destroy_plan(fft);
        // 3) normalize: scale every element (real and imaginary parts) by 1/total.
        cblas_sscal(dst.rows * dst.step1(), 1.f/dst.total(), dst.ptr<float>(), 1);
    }
    else
    {
        src = cv::Mat2f::zeros(rows, cols);
        dst = cv::Mat2f::zeros(rows, cols);
        // 1) copy the source into a complex matrix (the imaginary component is set to 0).
        support::parallel_for(cv::Range(0, _src.rows), [&src, &_src](const cv::Range& range)->void
        {
            for(int r=range.start; r<range.end; r++)
            {
                int c=0;
                const float* it_src = _src[r];
                float* it_dst = src.ptr<float>(r);
#if CV_ENABLE_UNROLLED
                for(;c<=_src.cols-4; c+=4, it_src+=4, it_dst+=8)
                {
                    *it_dst = *it_src;
                    *(it_dst+2) = *(it_src+1);
                    *(it_dst+4) = *(it_src+2);
                    *(it_dst+6) = *(it_src+3);
                }
#endif
                for(; c<_src.cols; c++, it_src++, it_dst+=2)
                    *it_dst = *it_src;
            }
        }, 0x80);
        // 2) prepare and apply the transform.
        fftwf_complex* ptr_in = reinterpret_cast<fftwf_complex*>(src.ptr<float>());
        fftwf_complex* ptr_out = reinterpret_cast<fftwf_complex*>(dst.ptr<float>());
        fftwf_plan fft = fftwf_plan_dft_2d(src.rows, src.cols, ptr_in, ptr_out, FFTW_FORWARD, FFTW_ESTIMATE);
        fftwf_execute(fft);
        fftwf_destroy_plan(fft);
        // 3) normalize: scale every element (real and imaginary parts) by 1/total.
        cblas_sscal(dst.rows * dst.step1(), 1.f/dst.total(), dst.ptr<float>(), 1);
    }
}
Note:
The parallel_for implementation is inspired by: How to use lambda as a parameter to parallel_for_
Thanks in advance for any help.
I figured out my issue.
The function as written does work (at least for the purpose I made it for).
My issue was this:
cv::Mat dst = cv::Mat::zeros(src.size(), CV_32FC2);
cv::Mat1f srcw = src;
cv::Mat1f dstw = dst;
fft2_32f(srcw, dstw); // reallocates dstw to the optimal size for the output, depending on the size of srcw ... so dstw is reallocated, but not dst.
dst.copyTo(_outputVariable);
In that case the correct information is stored in dstw but not in dst, because of the reallocation inside the function.
So when I tried to visualize my data, I got a black image.
The proper call should be:
cv::Mat dst;
cv::Mat1f srcw = src;
cv::Mat1f dstw;
fft2_32f(srcw, dstw); // reallocates dstw to the optimal size for the output, depending on the size of srcw ... so dstw is reallocated, but not dst.
dst = dstw;
dst.copyTo(_outputVariable); // or dstw.copyTo(_outputVariable);
With that code I got the proper output.
Note that depending on the application, a ROI (take a look at operator()(const cv::Rect&) of OpenCV's Mat container) corresponding to the size of the input may be useful in order to preserve the original dimensions.
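For example, a minimal sketch of such a crop, assuming srcw is the original input:
cv::Mat2f dstw;
fft2_32f(srcw, dstw); // dstw may be padded up to the optimal DFT size
// keep only the region matching the original input dimensions
cv::Mat2f spectrum = dstw(cv::Rect(0, 0, srcw.cols, srcw.rows));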
Thank you for your help :).
Can someone help me mark this topic as closed, please?

Adding float Mats in OpenCV 3.0

I have a problem with OpenCV 3.0.
I applied 12 Gabor filters (12 different orientations) to one image and stored the results.
Now I want to add all those images together and then divide each value by 12, to obtain the mean of the 12 filter responses.
Because those images are RGB, I have to work on each channel separately.
The problem is: when I add up the values, I obtain values > 12, even though every individual value is between 0 and 1.
The buggy part of the code:
for (i = 0; i < gaborV.size(); ++i) { // gaborV contains the 12 Gabor filter responses
    std::vector<cv::Mat> vec_split; // split because of the 3 channels
    cv::split(gaborV[i], vec_split);
    for (int k = 0; k < imgCol.rows; ++k) {
        for (int j = 0; j < imgCol.cols; ++j) {
            if (k == 1 && j == 1)
                std::cout << mat_X.at<float>(k, j) << " " << vec_split[0].at<float>(k, j) << std::endl;
            mat_X.at<float>(k, j) += vec_split[0].at<float>(k, j);
            mat_Y.at<float>(k, j) += vec_split[1].at<float>(k, j);
            mat_Z.at<float>(k, j) += vec_split[2].at<float>(k, j);
        }
    }
}
and mat_X, mat_Y and mat_Z are created as follows:
mat_X = mat_Y = mat_Z = cv::Mat(cvSize(imgColNormalize.cols, imgColNormalize.rows), CV_32FC1, cvScalar(0.));
As I said, all values in vec_split are between 0 and 1, but once out of the loop, mat_X, mat_Y and mat_Z contain values > 12.
The output of the cout I used:
0 0.507358
1.54751 0.496143
3.00963 0.528832
4.53887 0.465426
... and at the end I have 15.9459
And I don't understand, since 0 + 0.507358 != 1.54751 and 1.54751 + 0.496143 != 3.00963 ...
Does someone understand the problem?
Thanks to all!
I think the problem is here:
mat_X = mat_Y = mat_Z = cv::Mat(cvSize(imgColNormalize.cols, imgColNormalize.rows), CV_32FC1, cvScalar(0.));
The way you initialise these arrays results in all three cv::Mat objects referencing the same data. Only one Mat is created and so your code increments the values in this array three times.
For info, OpenCV uses a reference counting mechanism with cv::Mat and the assignment operator simply creates a new reference to existing data. If you wanted to create a genuine deep-copy of a cv::Mat, you would need to use cv::Mat::clone().
So, instead, initialise like so:
mat_X = cv::Mat(cvSize(imgColNormalize.cols, imgColNormalize.rows), CV_32FC1, cvScalar(0.));
mat_Y = cv::Mat(cvSize(imgColNormalize.cols, imgColNormalize.rows), CV_32FC1, cvScalar(0.));
mat_Z = cv::Mat(cvSize(imgColNormalize.cols, imgColNormalize.rows), CV_32FC1, cvScalar(0.));
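For illustration, a standalone sketch (not from the question) showing the shared-buffer behaviour and the clone() alternative:
#include <iostream>
#include <opencv2/core.hpp>

int main()
{
    cv::Mat a, b;
    a = b = cv::Mat::zeros(2, 2, CV_32FC1);      // one buffer, two headers
    a.at<float>(0, 0) = 1.f;                     // a write through a ...
    std::cout << b.at<float>(0, 0) << std::endl; // ... is visible through b: prints 1
    cv::Mat c = a.clone();                       // deep copy: c owns its own buffer
    a.at<float>(0, 0) = 2.f;
    std::cout << c.at<float>(0, 0) << std::endl; // prints 1, unaffected
    return 0;
}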

How to use Matlab's 512 element lookup table array in OpenCV?

I am designing morphological operations in OpenCV. I am trying to mimic the functions remove and bridge from MATLAB's bwmorph. To do this I referred to the function definition in bwmorph.m, where I obtained the lookup table arrays for remove and bridge.
After that step the procedure is the same for both MATLAB and OpenCV:
lut(img,lutarray,img)
The problem is that MATLAB uses a 512-element (9-bit) lookup table scheme, while OpenCV uses a 256-element (8-bit) scheme. How do I use the MATLAB lutarray in OpenCV?
After doing some research I came across this post.
What does the person mean when they say that they "split" the image from 0-512 and then into two parts?
Is the above method even correct? Are there any alternatives to doing this?
bwlookup(bw,lut)
http://se.mathworks.com/help/images/ref/bwlookup.html
or, internally, applylut, both perform a 2-by-2 or 3-by-3 neighborhood operation on a binary (black & white) image, whereas OpenCV's cv::LUT performs a per-pixel gray-level transform (closely related to intlut in MATLAB). An example of the latter is performing a gamma correction on a gray-level image.
//! transforms array of numbers using a lookup table: dst(i)=lut(src(i))
CV_EXPORTS_W void LUT(InputArray src, InputArray lut, OutputArray dst,
int interpolation=0);
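As a sketch of that per-pixel use, a gamma correction with cv::LUT on an 8-bit image (the variable names are illustrative):
cv::Mat table(1, 256, CV_8U);
for (int v = 0; v < 256; ++v) // build the 256-entry gray-level mapping once
    table.at<uchar>(v) = cv::saturate_cast<uchar>(std::pow(v / 255.0, 1.0 / 2.2) * 255.0);
cv::Mat corrected;
cv::LUT(img, table, corrected); // img: CV_8UC1 (a 1-channel table is also applied per channel to CV_8UC3)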
To my knowledge, there is no neighborhood bwlookup implementation in OpenCV. However, following the description of MATLAB's bwlookup, you can write it yourself.
// performs 3-by-3 lookup on binary image
void bwlookup(
        const cv::Mat & in,
        cv::Mat & out,
        const cv::Mat & lut,
        int bordertype=cv::BORDER_CONSTANT,
        cv::Scalar px = cv::Scalar(0) )
{
    if ( in.type() != CV_8UC1 )
        CV_Error(CV_StsError, "input image must be CV_8UC1");
    if ( lut.type() != CV_8UC1 || lut.rows*lut.cols!=512 || !lut.isContinuous() )
        CV_Error(CV_StsError, "lut size != 512" );
    if ( out.type() != in.type() || out.size() != in.size() )
        out = cv::Mat( in.size(), in.type() );

    const unsigned char * _lut = lut.data;
    cv::Mat t;
    cv::copyMakeBorder( in,t,1,1,1,1,bordertype,px);
    const int rows=in.rows+1;
    const int cols=in.cols+1;
    for ( int y=1;y<rows;++y)
    {
        for ( int x=1;x<cols;++x)
        {
            int L = 0;
            const int jmax=y+1;
#if 0 // row-major order
            for ( int j=y-1, k=1; j<=jmax; ++j, k<<=3 )
            {
                const unsigned char * p = t.ptr<unsigned char>(j) + x-1;
                for ( unsigned int u=0;u<3;++u )
                {
                    if ( p[u] )
                        L += (k<<u);
#else // column-major order (MATLAB)
            for ( int j=y-1, k=1; j<=jmax; ++j, k<<=1 )
            {
                const unsigned char * p = t.ptr<unsigned char>(j) + x-1;
                for ( unsigned int u=0;u<3;++u )
                {
                    if ( p[u] )
                        L += (k<<3*u);
#endif
                }
            }
            out.at<unsigned char>(y-1,x-1)=_lut[ L ];
        }
    }
}
I tested it against remove and bridge, so it should work. Hope that helps.
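A hypothetical usage sketch, assuming removeTable holds the 512 entries extracted from bwmorph.m:
uchar removeTable[512] = { /* 512 entries taken from bwmorph.m */ };
cv::Mat lut(512, 1, CV_8UC1, removeTable); // wraps the existing array, no copy
cv::Mat result;
bwlookup(binaryImage, result, lut);        // binaryImage: CV_8UC1, nonzero = foreground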
Edit: After checking against a random lookup table,
lut = uint8( rand(512,1)>0.5 ); % #MATLAB
B = bwlookup( A, lut );
I flipped the order in which the indices appear in the lookup table (it doesn't matter if the operation is symmetric).

Accessing certain pixel RGB value in OpenCV

I have searched the internet and Stack Overflow thoroughly, but I haven't found an answer to my question:
How can I get and set the RGB value of a certain pixel (given by its x,y coordinates) in OpenCV? What's important: I'm writing in C++, and the image is stored in a cv::Mat variable. I know there is an IplImage() operator, but IplImage is not very comfortable to use; as far as I know it comes from the C API.
Yes, I'm aware that there was already this Pixel access in OpenCV 2.2 thread, but it was only about black and white bitmaps.
EDIT:
Thank you very much for all your answers. I see there are many ways to get/set the RGB value of a pixel. I got one more idea from a close friend (thanks, Benny!). It's very simple and effective. I think it's a matter of taste which one you choose.
Mat image;
(...)
Point3_<uchar>* p = image.ptr<Point3_<uchar> >(y,x);
And then you can read/write RGB values with:
p->x //B
p->y //G
p->z //R
Try the following:
cv::Mat image = ...do some stuff...;
image.at<cv::Vec3b>(y,x) gives you the RGB vector (it might be ordered as BGR) of type cv::Vec3b:
image.at<cv::Vec3b>(y,x)[0] = newval[0];
image.at<cv::Vec3b>(y,x)[1] = newval[1];
image.at<cv::Vec3b>(y,x)[2] = newval[2];
The low-level way would be to access the matrix data directly. In an RGB image (which I believe OpenCV typically stores as BGR), and assuming your cv::Mat variable is called frame, you could get the blue value at location (x, y) (from the top left) this way:
frame.data[frame.channels()*(frame.cols*y + x)];
Likewise, to get B, G, and R:
uchar b = frame.data[frame.channels()*(frame.cols*y + x) + 0];
uchar g = frame.data[frame.channels()*(frame.cols*y + x) + 1];
uchar r = frame.data[frame.channels()*(frame.cols*y + x) + 2];
Note that this code assumes each row is stored contiguously, i.e. that the stride equals the image width times the number of channels; in general, use frame.step to account for possible row padding.
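A stride-aware sketch of the same access using Mat::ptr, which accounts for frame.step (assuming an 8-bit BGR frame):
const uchar* row = frame.ptr<uchar>(y); // start of row y, respects any row padding
uchar b = row[frame.channels()*x + 0];
uchar g = row[frame.channels()*x + 1];
uchar r = row[frame.channels()*x + 2];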
A piece of code is easier for people who have this problem, so I'm sharing my code for you to use directly. Please note that OpenCV stores pixels as BGR.
cv::Mat vImage_;
if(src_)
{
    cv::Vec3f vec_;
    for(int i = 0; i < vHeight_; i++)
        for(int j = 0; j < vWidth_; j++)
        {
            vec_ = cv::Vec3f((*src_)[0]/255.0, (*src_)[1]/255.0, (*src_)[2]/255.0); // note that OpenCV stores pixels as BGR
            vImage_.at<cv::Vec3f>(vHeight_-1-i, j) = vec_;
            ++src_;
        }
}
if(! vImage_.data ) // check for invalid input
    printf("failed to read image by OpenCV.");
else
{
    cv::namedWindow( windowName_, CV_WINDOW_AUTOSIZE );
    cv::imshow( windowName_, vImage_ ); // show the image
}
The current version allows the cv::Mat::at function to handle 3 dimensions. So for a Mat object m, m.at<uchar>(0,0,0) should work.
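A minimal sketch of that three-index form; note that it applies to a genuinely three-dimensional Mat, not to a 3-channel 2D one:
int dims[] = {4, 4, 4};                     // a 4x4x4 single-channel volume
cv::Mat m(3, dims, CV_8UC1, cv::Scalar(0)); // ndims, sizes, type, initial value
m.at<uchar>(0, 0, 0) = 42;                  // one index per dimension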
uchar * value = img2.data; // pointer to the first byte of the pixel data; the image is one contiguous array
int r = 2;
// walks every byte and writes the BGR pattern (255, 0, 0), i.e. paints the image solid blue
for (size_t i = 0; i < img2.cols * (img2.rows * img2.channels()); i++)
{
    if (r > 2) r = 0;
    if (r == 0) value[i] = 0;
    if (r == 1) value[i] = 0;
    if (r == 2) value[i] = 255;
    r++;
}
const double pi = boost::math::constants::pi<double>();

cv::Mat distance2ellipse(cv::Mat image, cv::RotatedRect ellipse)
{
    float distance = 2.0f;
    float angle = ellipse.angle;
    cv::Point ellipse_center = ellipse.center;
    float major_axis = ellipse.size.width/2;
    float minor_axis = ellipse.size.height/2;
    for(int x = 0; x < image.cols; x++)
    {
        for(int y = 0; y < image.rows; y++)
        {
            // rotate the pixel into the ellipse's coordinate frame
            auto u = cos(angle*pi/180)*(x-ellipse_center.x) + sin(angle*pi/180)*(y-ellipse_center.y);
            auto v = -sin(angle*pi/180)*(x-ellipse_center.x) + cos(angle*pi/180)*(y-ellipse_center.y);
            // normalized ellipse equation: a value <= 1 means the pixel lies inside
            distance = (u/major_axis)*(u/major_axis) + (v/minor_axis)*(v/minor_axis);
            if(distance <= 1)
            {
                image.at<cv::Vec3b>(y,x)[1] = 255; // set the green channel
            }
        }
    }
    return image;
}