OpenCV Mat data member access - c++

I've seen a lot of OpenCV code that accesses the data member of a cv::Mat directly. cv::Mat stores the pointer to its data in an unsigned char* data member. Access to the data member looks like this:
cv::Mat matUC(3,3,CV_8U);
int rowIdx = 1;
int colIdx = 1;
unsigned char val = matUC.data[rowIdx * matUC.cols + colIdx];
I'm wondering if this works for a cv::Mat with a pixel type other than unsigned char.
cv::Mat matF(3,3,CV_32F);
int rowIdx = 1;
int colIdx = 1;
float val = matF.data[rowIdx * matF.cols + colIdx];
My understanding would be that a typecast is needed to access the elements correctly. Something like:
float val = ((float*)matF.data)[rowIdx * matF.cols + colIdx];
I've seen a lot of code which doesn't use a typecast. So my question is: Is the typecast mandatory to access the correct element?

Mat::data is a uchar*. If you have, say, a float matrix (CV_32FC1), you need to access the data as float.
You can do this in different ways, not necessarily using a cast:
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    cv::Mat matF(3, 3, CV_32F);
    randu(matF, Scalar(0), Scalar(10));
    int rowIdx = 1;
    int colIdx = 1;
    // 1) the at<T> accessor
    float f1 = matF.at<float>(rowIdx, colIdx);
    // 2) cast the data pointer and index it manually
    float* fData2 = (float*)matF.data;
    float f2 = fData2[rowIdx * matF.step1() + colIdx];
    // 3) typed pointer to the first row, manual indexing
    float* fData3 = matF.ptr<float>(0);
    float f3 = fData3[rowIdx * matF.step1() + colIdx];
    // 4) typed pointer directly to the target row
    float* fData4 = matF.ptr<float>(rowIdx);
    float f4 = fData4[colIdx];
    // 5) a typed Mat1f and operator()
    Mat1f mm(matF); // Or directly create like: Mat1f mm(3, 3);
    float f5 = mm(rowIdx, colIdx);
    // f1 == f2 == f3 == f4 == f5
    return 0;
}
Notes
It's better to use step1() instead of cols when accessing data directly through pointers, since the matrix may not be continuous (for example, when it is a ROI of a larger matrix). Check here for more details.
The snippet is adapted from another answer of mine.
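To make the continuity note concrete, here is a minimal sketch of my own (not part of the original answer): a ROI into a larger matrix is non-continuous, so cols and step1() differ.
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat big(5, 5, CV_32F, cv::Scalar(0));
    cv::Mat roi = big(cv::Rect(1, 1, 3, 3)); // 3x3 view into the 5x5 matrix
    // roi.cols == 3, but roi.step1() == 5 and roi.isContinuous() == false,
    // so indexing with cols would read the wrong elements.
    float v = ((float*)roi.data)[1 * roi.step1() + 1]; // element (1,1) of the ROI
    return 0;
}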

Related

ITK Filter on an array of floats

I want to use ITK to filter a 3D volume that is in a contiguous float array. I want to apply the Curvature Flow Filter as shown in this example.
I don't know how to interface my float array with ITK and then to get a float array out on the other side. Will it modify the data in place? How does it work?
Here is some code to demonstrate what I am trying to do.
#include "itkCurvatureFlowImageFilter.h"
int main(int argc, char *argv[])
{
    float *my_array; // I have an array I have generated elsewhere
    // This is a 3D volume in order XYZ with these dimensions along each axis
    size_t num_x = 125;
    size_t num_y = 250;
    size_t num_z = 125;
    size_t num_elements = num_x * num_y * num_z;
    float voxel_size = 8e-3; // 8 mm voxels
    constexpr unsigned int Dimension = 3;
    // convert to an itk image
    // itkimage = ???
    using InputPixelType = float;
    using InputImageType = itk::Image<InputPixelType, Dimension>;
    const int numberOfIterations = 10;
    const InputPixelType timeStep = 0.05;
    using FilterType = itk::CurvatureFlowImageFilter<InputImageType, InputImageType>;
    FilterType::Pointer filter = FilterType::New();
    filter->SetInput(itkimage);
    filter->SetNumberOfIterations(numberOfIterations);
    filter->SetTimeStep(timeStep);
    // now I want to put the filter result back in the array
    filter->GetOutput();
    // Is the data modified in place? How can I get a regular float array out??
    return EXIT_SUCCESS;
}
itk::ImportImageFilter exists precisely for representing an existing array as an image. Look at the examples linked in its documentation.
As CurvatureFlowImageFilter derives from InPlaceImageFilter, and your input and output pixel types are the same (float), it can run in-place. But you still need to request that:
itkimage = import...;
...
filter->SetInput(itkimage );
// set other parameters
filter->SetInPlace(true);
filter->Update();
itkimage = filter->GetOutput();
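For completeness, here is a minimal, untested sketch of the import step based on the variables from the question (the spacing and the ownership flag are my assumptions):
#include "itkImportImageFilter.h"

using ImportFilterType = itk::ImportImageFilter<float, 3>;
ImportFilterType::Pointer importFilter = ImportFilterType::New();

ImportFilterType::SizeType size;
size[0] = num_x; // fastest-moving axis first
size[1] = num_y;
size[2] = num_z;
ImportFilterType::IndexType start;
start.Fill(0);
ImportFilterType::RegionType region;
region.SetIndex(start);
region.SetSize(size);
importFilter->SetRegion(region);

double spacing[3] = { voxel_size, voxel_size, voxel_size };
importFilter->SetSpacing(spacing);

// false: the filter will not free my_array when it is destroyed
importFilter->SetImportPointer(my_array, num_elements, false);
importFilter->Update();
InputImageType::Pointer itkimage = importFilter->GetOutput();
// After running the curvature flow filter, a plain float array is
// available again via filter->GetOutput()->GetBufferPointer().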

Halide Jit compilation

I'm trying to JIT-compile my Halide program so that I can use it later in code, several times, on different images. But I think I'm doing something wrong; can anyone correct me?
First I create the Halide function to run:
void m_gammaFunctionTMOGenerate()
{
    Halide::ImageParam img(Halide::type_of<float>(), 3);
    img.set_stride(0, 4);
    img.set_stride(2, 1);
    Halide::Var x, y, c;
    Halide::Param<float> key, sat, clampMax, clampMin;
    Halide::Param<bool> cS;
    Halide::Func gamma;
    // algorithm
    //img.width() , img.height();
    if (cS.get())
    {
        float k1 = 1.6774;
        float k2 = 0.9925;
        sat.set((1 + k1) * pow(key.get(), k2) / (1 + k1 * pow(key.get(), k2)));
    }
    Halide::Expr luminance = img(x, y, 0) * 0.072186f + img(x, y, 1) * 0.715158f + img(x, y, 2) * 0.212656f;
    Halide::Expr ldr_lum = (luminance - clampMin) / (clampMax - clampMin);
    Halide::clamp(ldr_lum, 0.f, 1.f);
    ldr_lum = Halide::pow(ldr_lum, key);
    Halide::Expr imLum = img(x, y, c) / luminance;
    imLum = Halide::pow(imLum, sat) * ldr_lum;
    Halide::clamp(imLum, 0.f, 1.f);
    gamma(x, y, c) = imLum;
    // schedule
    gamma.vectorize(x, 16).parallel(y);
    // compilation
    auto & obuff = gamma.output_buffer();
    obuff.set_stride(0, 4);
    obuff.set_stride(2, 1);
    obuff.set_extent(2, 3);
    std::vector<Halide::Argument> arguments = { img, key, sat, clampMax, clampMin, cS };
    m_gammaFunction = (gammafunction)(gamma.compile_jit());
}
I store it in a pointer:
typedef int(*gammafunction)(buffer_t*, float, float, float, float, bool, buffer_t*);
gammafunction m_gammaFunction;
Then I try to run it:
buffer_t output_buf = { 0 };
// The host pointers point to the start of the image data:
buffer_t buf = { 0 };
buf.host = (uint8_t *)data; // Might also need const_cast
float * output = new float[width * height * 4];
output_buf.host = (uint8_t*)(output);
// If the buffer doesn't start at (0, 0), then assign mins
output_buf.extent[0] = buf.extent[0] = width; // In elements, not bytes
output_buf.extent[1] = buf.extent[1] = height; // In elements, not bytes
output_buf.extent[2] = buf.extent[2] = 4; // Assuming RGBA
// No need to assign additional extents as they were init'ed to zero above
output_buf.stride[0] = buf.stride[0] = 4; // RGBA interleaved
output_buf.stride[1] = buf.stride[1] = width * 4; // Assuming no line padding
output_buf.stride[2] = buf.stride[2] = 1; // Channel interleaved
output_buf.elem_size = buf.elem_size = sizeof(float);
// Run the pipeline
int error = m_photoFunction(&buf, params[0], &output_buf);
But it doesn't work...
Error:
Exception thrown at 0x000002974F552DE0 in Viewer.exe: 0xC0000005: Access violation executing location 0x000002974F552DE0.
If there is a handler for this exception, the program may be safely continued.
Edit:
Here is my code for running the function:
buffer_t output_buf = { 0 };
// The host pointers point to the start of the image data:
buffer_t buf = { 0 };
buf.host = (uint8_t *)data; // Might also need const_cast
float * output = new float[width * height * 4];
output_buf.host = (uint8_t*)(output);
// If the buffer doesn't start at (0, 0), then assign mins
output_buf.extent[0] = buf.extent[0] = width; // In elements, not bytes
output_buf.extent[1] = buf.extent[1] = height; // In elements, not bytes
output_buf.extent[2] = buf.extent[2] = 3; // Assuming RGBA
// No need to assign additional extents as they were init'ed to zero above
output_buf.stride[0] = buf.stride[0] = 4; // RGBA interleaved
output_buf.stride[1] = buf.stride[1] = width * 4; // Assuming no line padding
output_buf.stride[2] = buf.stride[2] = 1; // Channel interleaved
output_buf.elem_size = buf.elem_size = sizeof(float);
// Run the pipeline
int error = m_gammaFunction(&buf, params[0], params[1], params[2], params[3], params[4] > 0.5 ? true : false, &output_buf);
if (error) {
    printf("Halide returned an error: %d\n", error);
    return -1;
}
memcpy(output, data, size * sizeof(float));
Can anyone help me with it?
Edit:
Thanks to @KhouriGiordano, I found out what I was doing wrong. Indeed, I had switched from AOT compilation to this code. So now my code looks like this:
class GammaOperator
{
public:
    GammaOperator();
    int realize(buffer_t * input, float params[], buffer_t * output, int width);
private:
    HalideFloat m_key;
    HalideFloat m_sat;
    HalideFloat m_clampMax;
    HalideFloat m_clampMin;
    HalideBool  m_cS;
    Halide::ImageParam m_img;
    Halide::Var x, y, c;
    Halide::Func m_gamma;
};

GammaOperator::GammaOperator()
    : m_img( Halide::type_of<float>(), 3)
{
    Halide::Expr w = (1.f + 1.6774f) * pow(m_key.get(), 0.9925f) / (1.f + 1.6774f * pow(m_key.get(), 0.9925f));
    Halide::Expr sat = Halide::select(m_cS, m_sat, w);
    Halide::Expr luminance = m_img(x, y, 0) * 0.072186f + m_img(x, y, 1) * 0.715158f + m_img(x, y, 2) * 0.212656f;
    Halide::Expr ldr_lum = (luminance - m_clampMin) / (m_clampMax - m_clampMin);
    ldr_lum = Halide::clamp(ldr_lum, 0.f, 1.f);
    ldr_lum = Halide::pow(ldr_lum, m_key);
    Halide::Expr imLum = m_img(x, y, c) / luminance;
    imLum = Halide::pow(imLum, sat) * ldr_lum;
    imLum = Halide::clamp(imLum, 0.f, 1.f);
    m_gamma(x, y, c) = imLum;
}

int GammaOperator::realize(buffer_t * input, float params[], buffer_t * output, int width)
{
    m_img.set(Halide::Buffer(Halide::type_of<float>(), input));
    m_img.set_stride(0, 4);
    m_img.set_stride(1, width * 4);
    m_img.set_stride(2, 4);
    // algorithm
    m_gamma.vectorize(x, 16).parallel(y);
    //params[0], params[1], params[2], params[3], params[4] > 0.5 ? true : false
    //{ img, key, sat, clampMax, clampMin, cS };
    m_key.set(params[0]);
    m_sat.set(params[1]);
    m_clampMax.set(params[2]);
    m_clampMin.set(params[3]);
    m_cS.set(params[4] > 0.5f ? true : false);
    // compilation
    m_gamma.realize(Halide::Buffer(Halide::type_of<float>(), output));
    return 0;
}
and I use it like this:
buffer_t output_buf = { 0 };
// The host pointers point to the start of the image data:
buffer_t buf = { 0 };
buf.host = (uint8_t *)data; // Might also need const_cast
float * output = new float[width * height * 4];
output_buf.host = (uint8_t*)(output);
// If the buffer doesn't start at (0, 0), then assign mins
output_buf.extent[0] = buf.extent[0] = width; // In elements, not bytes
output_buf.extent[1] = buf.extent[1] = height; // In elements, not bytes
output_buf.extent[2] = buf.extent[2] = 4; // Assuming RGBA
// No need to assign additional extents as they were init'ed to zero above
output_buf.stride[0] = buf.stride[0] = 4; // RGBA interleaved
output_buf.stride[1] = buf.stride[1] = width * 4; // Assuming no line padding
output_buf.stride[2] = buf.stride[2] = 1; // Channel interleaved
output_buf.elem_size = buf.elem_size = sizeof(float);
// Run the pipeline
int error = s_gamma->realize(&buf, params, &output_buf, width);
but it still crashes in the m_gamma.realize function, with this message in the console:
Error: Constraint violated: f0.stride.0 (4) == 1 (1)
By using Halide::Param::get(), you are extracting the value (default 0) held by the Param object at the time you call get(). If you want to use the parameter value given at the time you call the generated function, just use the Param without calling get() and it will be implicitly converted to an Expr.
Since Param is not convertible to a boolean, the Halide way of doing an if is Halide::select().
You aren't using the clamped return value of Halide::clamp().
I don't see cS being used by the Halide code, only the C code.
Now to your JIT problem. It looks like you started doing AOT compilation and switched to JIT.
You make a std::vector<Halide::Argument> but don't pass it anywhere. How can Halide know what Param you want to use? It looks at the Func and finds references to ImageParam and Param objects.
How can you know what order it expects the Params in? You have no control over this. I was able to dump the bitcode by defining HL_GENBITCODE=1 and then view it with llvm-dis to see your function:
int gamma
( buffer_t *img
, float clampMax
, float key
, float clampMin
, float sat
, void *user_context
, buffer_t *result
);
Use gamma.realize(Halide::Buffer(Halide::type_of<float>(), &output_buf)) instead of using gamma.compile_jit() and trying to call the generated function yourself.
For one time use:
Use Image instead of ImageParam.
Use Expr instead of Param.
For repeated use with a single JIT compile:
Keep the ImageParam and Param around and set them before realizing the Func, as in the sketch below.
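As a minimal sketch of that repeated-use pattern (my own illustration with a toy pipeline, written against the current Halide::Buffer API rather than the old buffer_t one):
#include "Halide.h"

int main()
{
    // Build the pipeline once, keeping the ImageParam/Param around.
    Halide::ImageParam img(Halide::Float(32), 3);
    Halide::Param<float> key;
    Halide::Var x, y, c;
    Halide::Func gamma;
    gamma(x, y, c) = Halide::pow(img(x, y, c), key);

    // Reuse it on different images: set the params, then realize.
    // The first realize() JIT-compiles; later calls reuse the compiled code.
    Halide::Buffer<float> in1(64, 64, 3), in2(64, 64, 3);
    img.set(in1);
    key.set(0.5f);
    Halide::Buffer<float> out1 = gamma.realize({64, 64, 3});
    img.set(in2);
    key.set(2.0f);
    Halide::Buffer<float> out2 = gamma.realize({64, 64, 3});
    return 0;
}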

How do I pass an OpenCV Mat into a C++ Tensorflow graph?

In Tensorflow C++ I can load an image file into the graph using
tensorflow::Node* file_reader = tensorflow::ops::ReadFile(tensorflow::ops::Const(IMAGE_FILE_NAME, b.opts()),b.opts().WithName(input_name));
tensorflow::Node* image_reader = tensorflow::ops::DecodePng(file_reader, b.opts().WithAttr("channels", 3).WithName("png_reader"));
tensorflow::Node* float_caster = tensorflow::ops::Cast(image_reader, tensorflow::DT_FLOAT, b.opts().WithName("float_caster"));
tensorflow::Node* dims_expander = tensorflow::ops::ExpandDims(float_caster, tensorflow::ops::Const(0, b.opts()), b.opts());
tensorflow::Node* resized = tensorflow::ops::ResizeBilinear(dims_expander, tensorflow::ops::Const({input_height, input_width},b.opts().WithName("size")),b.opts());
For an embedded application I would like to instead pass an OpenCV Mat into this graph.
How would I convert the Mat to a tensor that could be used as input to tensorflow::ops::Cast or tensorflow::ops::ExpandDims?
It's not directly from a CvMat, but you can see an example of how to initialize a Tensor from an in-memory array in the TensorFlow Android example:
https://github.com/tensorflow/tensorflow/blob/0.6.0/tensorflow/examples/android/jni/tensorflow_jni.cc#L173
You would start off by creating a new tensorflow::Tensor object, with something like this (all code untested):
tensorflow::Tensor input_tensor(tensorflow::DT_FLOAT,
tensorflow::TensorShape({1, height, width, depth}));
This creates a Tensor object with float values, a batch size of 1, a size of width x height, and depth channels. For example, a 128-wide by 64-high image with 3 channels would pass in a shape of {1, 64, 128, 3}. The batch size is only needed when you pass multiple images in a single call; for simple uses you can leave it as 1.
Then you would get the underlying array behind the tensor using a line like this:
auto input_tensor_mapped = input_tensor.tensor<float, 4>();
The input_tensor_mapped object is an interface to the data in your newly-created tensor, and you can then copy your own data into it. Here I'm assuming you've set source_data as a pointer to your source data, for example:
const float* source_data = some_structure.imageData;
You can then loop through your data and copy it over:
for (int y = 0; y < height; ++y) {
    const float* source_row = source_data + (y * width * depth);
    for (int x = 0; x < width; ++x) {
        const float* source_pixel = source_row + (x * depth);
        for (int c = 0; c < depth; ++c) {
            const float* source_value = source_pixel + c;
            input_tensor_mapped(0, y, x, c) = *source_value;
        }
    }
}
There are obvious opportunities to optimize this naive approach, and I don't have sample code on hand to show how to deal with the OpenCV side of getting the source data, but hopefully this is helpful to get you started.
Here is a complete example that reads an image and feeds it to the graph:
Mat image;
image = imread("flowers.jpg", CV_LOAD_IMAGE_COLOR);
cv::resize(image, image, cv::Size(input_height, input_width), 0, 0, CV_INTER_CUBIC);
int depth = 3;
tensorflow::Tensor input_tensor(tensorflow::DT_FLOAT,
                                tensorflow::TensorShape({1, image.rows, image.cols, depth}));
auto input_tensor_mapped = input_tensor.tensor<float, 4>();
for (int y = 0; y < image.rows; y++) {
    for (int x = 0; x < image.cols; x++) {
        Vec3b pixel = image.at<Vec3b>(y, x);
        input_tensor_mapped(0, y, x, 0) = pixel.val[2]; // R
        input_tensor_mapped(0, y, x, 1) = pixel.val[1]; // G
        input_tensor_mapped(0, y, x, 2) = pixel.val[0]; // B
    }
}
auto result = Sub(root.WithOpName("subtract_mean"), input_tensor, {input_mean});
ClientSession session(root);
TF_CHECK_OK(session.Run({result}, out_tensors));
I tried to run the Inception model on an OpenCV Mat and the following code worked for me: https://gist.github.com/kyrs/9adf86366e9e4f04addb. There are still some issues with the integration of OpenCV and TensorFlow, though: the code worked without any issue for .png files but failed to load .jpg and .jpeg. You can follow this for more info: https://github.com/tensorflow/tensorflow/issues/1924
Tensor convertMatToTensor(Mat &input)
{
    int height = input.rows;
    int width = input.cols;
    int depth = input.channels();
    // Allocate a float tensor of shape {height, width, depth} (no batch dimension)
    Tensor imgTensor(tensorflow::DT_FLOAT, tensorflow::TensorShape({height, width, depth}));
    // Wrap the tensor's buffer in a Mat header, then convert directly into it
    float* p = imgTensor.flat<float>().data();
    Mat outputImg(height, width, CV_32FC3, p);
    input.convertTo(outputImg, CV_32FC3);
    return imgTensor;
}
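A hypothetical usage of that helper might look like the following; the session setup and the "input"/"output" node names are assumptions, and note that the tensor has no batch dimension, so your graph must accept that shape:
cv::Mat img = cv::imread("flowers.jpg", CV_LOAD_IMAGE_COLOR);
tensorflow::Tensor t = convertMatToTensor(img);
std::vector<tensorflow::Tensor> outputs;
// session is a tensorflow::Session* created and loaded elsewhere
TF_CHECK_OK(session->Run({{"input", t}}, {"output"}, {}, &outputs));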

Get data from mat C openCV

I have some C++ OpenCV code.
Something like
void foo(...., Mat mx){
    ...
    x_d = mx.at<float>(x,y);
    ...
}
I'm trying to rewrite this part in C. I read the docs, and they say that data is a union inside the CvMat structure. But I got different error messages while trying to access it. Here they are:
void foo(...., CvMat *mx){
    ...
    x_d = *(((float*)mx->data) + y * width + x); // cannot convert to a pointer type
    x_d = *(mx->data + y * width + x); // invalid operands to binary + (have 'union <anonymous>' and 'int')
    ...
}
So I have two questions:
How can I get that float out of the structure in this code?
Is this the data that I need?
You can use the cvGet?D functions or the CV_MAT_ELEM macro.
See the example below:
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    CvMat* m = cvCreateMat(2, 3, CV_32FC1);
    cvSet(m, cvScalar(1.0f));
    cvSet2D(m, 0, 1, cvScalar(2.0f));
    // m
    // 1.0 2.0 1.0
    // 1.0 1.0 1.0

    // Using cvGet2D
    CvScalar v00 = cvGet2D(m, 0, 0);
    float f00 = v00.val[0];
    // f00 = 1.0
    CvScalar v01 = cvGet2D(m, 0, 1);
    float f01 = v01.val[0];
    // f01 = 2.0

    // Using CV_MAT_ELEM
    float g00 = CV_MAT_ELEM(*m, float, 0, 0);
    // g00 = 1.0
    float g01 = CV_MAT_ELEM(*m, float, 0, 1);
    // g01 = 2.0
    return 0;
}
The first instruction is almost correct, but data is a union, so it cannot be cast directly; that is what the compiler error is complaining about. Access the float member of the union instead:
x_d = mx->data.fl[y * width + x];
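As a minimal sketch (mine, untested) that also accounts for row padding via step:
/* Assumes a CV_32FC1 matrix; step is in bytes, so divide by sizeof(float). */
CvMat *mx = cvCreateMat(3, 3, CV_32FC1);
int x = 1, y = 2;
float x_d = mx->data.fl[y * (mx->step / sizeof(float)) + x];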

Accessing certain pixel RGB value in openCV

I have searched internet and stackoverflow thoroughly, but I haven't found answer to my question:
How can I get and set the RGB value of a certain pixel (given by x,y coordinates) in OpenCV? What's important: I'm writing in C++, and the image is stored in a cv::Mat variable. I know there is an IplImage() operator, but IplImage is not very comfortable to use; as far as I know, it comes from the C API.
Yes, I'm aware that there was already this Pixel access in OpenCV 2.2 thread, but it was only about black and white bitmaps.
EDIT:
Thank you very much for all your answers. I see there are many ways to get/set the RGB value of a pixel. I got one more idea from my close friend (thanks, Benny!). It's very simple and effective. I think it's a matter of taste which one you choose.
Mat image;
(...)
Point3_<uchar>* p = image.ptr<Point3_<uchar> >(y,x);
And then you can read/write RGB values with:
p->x //B
p->y //G
p->z //R
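For example, to overwrite that pixel with pure red (a small sketch of my own):
p->x = 0;   // B
p->y = 0;   // G
p->z = 255; // R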
Try the following:
cv::Mat image = ...do some stuff...;
image.at<cv::Vec3b>(y,x) gives you the RGB vector of type cv::Vec3b (it might be ordered as BGR):
image.at<cv::Vec3b>(y,x)[0] = newval[0];
image.at<cv::Vec3b>(y,x)[1] = newval[1];
image.at<cv::Vec3b>(y,x)[2] = newval[2];
The low-level way would be to access the matrix data directly. In an RGB image (which I believe OpenCV typically stores as BGR), and assuming your cv::Mat variable is called frame, you could get the blue value at location (x, y) (from the top left) this way:
frame.data[frame.channels()*(frame.cols*y + x)];
Likewise, to get B, G, and R:
uchar b = frame.data[frame.channels()*(frame.cols*y + x) + 0];
uchar g = frame.data[frame.channels()*(frame.cols*y + x) + 1];
uchar r = frame.data[frame.channels()*(frame.cols*y + x) + 2];
Note that this code assumes the stride is equal to the width of the image.
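If the rows may be padded (for example, when frame is a ROI), a stride-aware variant of the same idea, as a sketch, uses step, the row size in bytes (which equals elements for a CV_8UC3 image):
uchar b = frame.data[frame.step * y + frame.channels() * x + 0];
uchar g = frame.data[frame.step * y + frame.channels() * x + 1];
uchar r = frame.data[frame.step * y + frame.channels() * x + 2];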
A piece of code is easier for people who have this problem, so here is mine; you can use it directly. Please note that OpenCV stores pixels as BGR.
// Assuming vWidth_/vHeight_ are the image dimensions and src_ iterates over source pixels
cv::Mat vImage_(vHeight_, vWidth_, CV_32FC3);
if(src_)
{
    cv::Vec3f vec_;
    for(int i = 0; i < vHeight_; i++)
        for(int j = 0; j < vWidth_; j++)
        {
            // Please note that OpenCV stores pixels as BGR.
            vec_ = cv::Vec3f((*src_)[0]/255.0, (*src_)[1]/255.0, (*src_)[2]/255.0);
            vImage_.at<cv::Vec3f>(vHeight_-1-i, j) = vec_;
            ++src_;
        }
}
if(! vImage_.data ) // Check for invalid input
    printf("failed to read image by OpenCV.");
else
{
    cv::namedWindow( windowName_, CV_WINDOW_AUTOSIZE);
    cv::imshow( windowName_, vImage_); // Show the image.
}
The current version allows the cv::Mat::at function to handle 3 dimensions. So for a Mat object m, m.at<uchar>(0,0,0) should work.
uchar* value = img2.data; // pointer to the first pixel; the data is one flat array of all channel values
int r = 2;
for (size_t i = 0; i < img2.cols * (img2.rows * img2.channels()); i++)
{
    if (r > 2) r = 0;
    if (r == 0) value[i] = 0;
    if (r == 1) value[i] = 0;
    if (r == 2) value[i] = 255;
    r++;
}
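The next snippet takes a different approach: it walks the whole image and sets the green channel of every pixel that falls inside a given rotated ellipse.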
const double pi = boost::math::constants::pi<double>();

cv::Mat distance2ellipse(cv::Mat image, cv::RotatedRect ellipse){
    float distance = 2.0f;
    float angle = ellipse.angle;
    cv::Point ellipse_center = ellipse.center;
    float major_axis = ellipse.size.width/2;
    float minor_axis = ellipse.size.height/2;
    for(int x = 0; x < image.cols; x++)
    {
        for(int y = 0; y < image.rows; y++)
        {
            // Rotate the pixel into the ellipse's coordinate frame
            auto u = cos(angle*pi/180)*(x-ellipse_center.x) + sin(angle*pi/180)*(y-ellipse_center.y);
            auto v = -sin(angle*pi/180)*(x-ellipse_center.x) + cos(angle*pi/180)*(y-ellipse_center.y);
            // Normalized ellipse equation: a value <= 1 means the pixel is inside
            distance = (u/major_axis)*(u/major_axis) + (v/minor_axis)*(v/minor_axis);
            if(distance <= 1)
            {
                image.at<cv::Vec3b>(y,x)[1] = 255; // set the green channel
            }
        }
    }
    return image;
}