Edit: In trying to give a straightforward example of the problem, it appears I left out what was causing the real issue. I have modified the example to illustrate the problem.
I am trying to use opencv to perform operations on a cv::Mat that is composed of external data.
Consider this example:
unsigned char *extern_data = new unsigned char[1280*720*3];
cv::Mat mat = cv::Mat(1280, 720, CV_8UC3, extern_data); //Create cv::Mat external
//Edit - Added cv::imdecode
mat = cv::imdecode(mat,1);
//In real implementation it would be mat = cv::imdecode(image,'1')
// where image is a cv::Mat of an image stored in a mmap buffer
mat.data[100] = 99;
std::cout << "External array: " << static_cast<int>(extern_data[100]) << std::endl;
std::cout << "cv::Mat array: " << static_cast<int>(mat.data[100]) << std::endl;
The result of this is:
> External array: 0
> cv::Mat array: 100
It is clear the external array is not being modified, therefore new memory is being allocated for the cv::Mat data. From my understanding this was not supposed to happen! This should not have caused any copy operation, and mat.data should be a pointer to extern_data[0].
What am I misunderstanding?
So far the way I have got my program to work is to use std::copy. I am still wondering if there is a way to assign the result of cv::imdecode() directly to the external data.
Currently I am using
unsigned char *extern_data = new unsigned char[1280*720*3];
cv::Mat mat = cv::Mat(1280, 720, CV_8UC3, extern_data); //Create cv::Mat external
mat = cv::imdecode(mat,1);
std::copy(mat.data, mat.data + 1280*720*3, extern_data);
I just wish I could figure out how to assign the result of cv::imdecode() directly to extern_data without the additional std::copy line!
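A direction I have not verified yet: imdecode also has a three-argument overload, cv::imdecode(buf, flags, &dst), that decodes into a caller-supplied Mat. If dst already has the exact decoded size and type, Mat::create() should return without reallocating, so the pixels would land in the external buffer. A self-contained, untested sketch (the imencode call just manufactures some compressed bytes to stand in for the mmap buffer):
#include <opencv2/opencv.hpp>
#include <vector>
#include <iostream>
int main() {
    // Stand-in for the compressed bytes that normally come from the mmap buffer.
    cv::Mat original(720, 1280, CV_8UC3, cv::Scalar(10, 20, 30));
    std::vector<uchar> encoded;
    cv::imencode(".jpg", original, encoded);
    // External buffer that should receive the decoded pixels.
    unsigned char *extern_data = new unsigned char[720 * 1280 * 3];
    cv::Mat dst(720, 1280, CV_8UC3, extern_data);
    // Three-argument overload: decode into dst. Because dst already matches the
    // decoded size and type, no reallocation should occur.
    cv::imdecode(encoded, 1, &dst);
    std::cout << "still external? " << (dst.data == extern_data) << std::endl;
    delete[] extern_data;
    return 0;
}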
Related
So I'm trying to access jpg image data that's stored in the computer's memory, but I'm only able to access the starting address of the data and the size of the data. The pointer to the beginning of the data is an uint8_t * and the size of the data is an uint32_t.
Using fwrite, I can write the data to a jpg file and display it properly, so I know the data is correct and exists. How can I directly store the jpg image data in a variable? Ultimately, I want to store it in an opencv Mat too.
I'm not sure what code to show, so if you want to see a specific part of it, just ask.
Do you know the resolution of the image? If so,
to store it in an OpenCV Mat you can do:
cv::Mat buf = cv::Mat(height, width, CV_8U, buff_ptr);
cv::Mat img = cv::imdecode(buf, CV_LOAD_IMAGE_COLOR);
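For what it's worth, since imdecode only needs the encoded byte stream, I believe a flat 1 x size header over the same pointer works as well, so the decoded resolution is not strictly needed for this step (data_size here stands for the uint32_t length you mentioned):
cv::Mat buf = cv::Mat(1, data_size, CV_8U, buff_ptr);
cv::Mat img = cv::imdecode(buf, CV_LOAD_IMAGE_COLOR);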
How about this:
uint8_t *JPEG;
JPEG = (uint8_t *)malloc(sizeof(uint8_t) * size_var); // the cast is required in C++
if (JPEG) {
    memcpy(JPEG, your_pointer, size_var);
}
However, you do already have the data in a variable since you already have a pointer to it. :)
If you wanted to be really fancy you could build a struct that correctly formatted the header of a JPEG and then typecast the raw memory into the struct to better manipulate it.
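A minimal sketch of that idea, covering only the fixed-size start of a JFIF stream (the struct name and helper are hypothetical, and real JPEG data continues with variable-length segments, so this is a header check, not a full parser):
#include <cstdint>
#include <cstring>
#pragma pack(push, 1)               // avoid padding so the layout matches the byte stream
struct JfifHeaderStart {
    uint8_t  soi[2];                // 0xFF 0xD8, start-of-image marker
    uint8_t  app0[2];               // 0xFF 0xE0, APP0 marker
    uint16_t length;                // big-endian segment length
    char     identifier[5];         // "JFIF\0"
};
#pragma pack(pop)
bool looksLikeJfif(const uint8_t *data, uint32_t size) {
    if (size < sizeof(JfifHeaderStart)) return false;
    JfifHeaderStart hdr;
    std::memcpy(&hdr, data, sizeof(hdr));   // memcpy instead of a raw pointer cast avoids alignment issues
    return hdr.soi[0] == 0xFF && hdr.soi[1] == 0xD8 &&
           std::memcmp(hdr.identifier, "JFIF", 5) == 0;
}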
If you have a variable of type uint8_t * that contains a pointer to it, you can access it just like an array.
uint8_t* my_pointer;
int my_len;
for (int i = 0; i < my_len; ++i)
    cout << "The value of byte " << i
         << " of the data is " << static_cast<int>(my_pointer[i]) << endl; // cast so uint8_t prints as a number, not a char
I am a novice in OpenCV. Recently, I have had trouble finding OpenCV functions to convert from Mat to Array. I researched the .ptr and .at methods available in the OpenCV APIs, but I could not get proper data. I would like a direct conversion from Mat to Array (if available; if not, to Vector). I need OpenCV functions because the code has to undergo high-level synthesis in Vivado HLS. Please help.
If the memory of the Mat mat is continuous (all its data is continuous), you can copy its data into a 1D array directly:
std::vector<uchar> array(mat.rows*mat.cols*mat.channels());
if (mat.isContinuous())
    array.assign(mat.data, mat.data + array.size());
Otherwise, you have to get its data row by row, e.g. into a 2D array:
uchar **array = new uchar*[mat.rows];
for (int i = 0; i < mat.rows; ++i) {
    array[i] = new uchar[mat.cols*mat.channels()];
    std::memcpy(array[i], mat.ptr<uchar>(i), mat.cols*mat.channels()); // copy each row
}
UPDATE: It will be easier if you're using std::vector, where you can do it like this:
std::vector<uchar> array;
if (mat.isContinuous()) {
    // array.assign(mat.datastart, mat.dataend); // <- has problems for sub-matrix like mat = big_mat.row(i)
    array.assign(mat.data, mat.data + mat.total()*mat.channels());
} else {
    for (int i = 0; i < mat.rows; ++i) {
        array.insert(array.end(), mat.ptr<uchar>(i), mat.ptr<uchar>(i)+mat.cols*mat.channels());
    }
}
p.s.: For cv::Mats of other types, like CV_32F, you should do it like this:
std::vector<float> array;
if (mat.isContinuous()) {
    // array.assign((float*)mat.datastart, (float*)mat.dataend); // <- has problems for sub-matrix like mat = big_mat.row(i)
    array.assign((float*)mat.data, (float*)mat.data + mat.total()*mat.channels());
} else {
    for (int i = 0; i < mat.rows; ++i) {
        array.insert(array.end(), mat.ptr<float>(i), mat.ptr<float>(i)+mat.cols*mat.channels());
    }
}
UPDATE2: For OpenCV Mat data continuity, it can be summarized as follows:
Matrices created by imread(), clone(), or a constructor will always be continuous.
The only time a matrix will not be continuous is when it borrows data from an existing matrix (i.e. it was created out of an ROI of a big mat), unless the borrowed data is itself continuous within the big matrix, e.g. 1. a single row; 2. multiple rows with the full original width.
Please check out this code snippet for demonstration.
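In the same spirit, here is a minimal demonstration of these rules (a sketch assuming a reasonably recent OpenCV; the expected output is noted in the comments):
#include <opencv2/opencv.hpp>
#include <iostream>
int main() {
    // A freshly constructed matrix is continuous.
    cv::Mat big(100, 100, CV_8UC3);
    std::cout << "big:        " << big.isContinuous() << std::endl;                 // 1
    // A single-row ROI is still continuous (full original width).
    std::cout << "one row:    " << big.row(10).isContinuous() << std::endl;         // 1
    // A single-column ROI borrows data with gaps, so it is not continuous.
    std::cout << "one column: " << big.col(10).isContinuous() << std::endl;         // 0
    // clone() always yields a continuous copy.
    std::cout << "cloned col: " << big.col(10).clone().isContinuous() << std::endl; // 1
    return 0;
}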
Can be done in two lines :)
Mat to array
uchar * arr = image.isContinuous()? image.data: image.clone().data;
uint length = image.total()*image.channels();
Mat to vector
cv::Mat flat = image.reshape(1, image.total()*image.channels());
std::vector<uchar> vec = image.isContinuous()? flat : flat.clone();
Both work for any general cv::Mat.
Explanation with a working example
cv::Mat image;
image = cv::imread(argv[1], cv::IMREAD_UNCHANGED); // Read the file
cv::namedWindow("cvmat", cv::WINDOW_AUTOSIZE );// Create a window for display.
cv::imshow("cvmat", image ); // Show our image inside it.
// flatten the mat.
uint totalElements = image.total()*image.channels(); // Note: image.total() == rows*cols.
cv::Mat flat = image.reshape(1, totalElements); // Nx1 mat of 1 channel, O(1) operation
if(!image.isContinuous()) {
    flat = flat.clone(); // O(N) copy
}
// flat.data is your array pointer
auto * ptr = flat.data; // usually it's uchar*
// You have your array; its length is flat.total() [rows = totalElements, cols = 1]
// Converting to vector
std::vector<uchar> vec(flat.data, flat.data + flat.total());
// Testing by reconstruction of cvMat
cv::Mat restored = cv::Mat(image.rows, image.cols, image.type(), ptr); // OR vec.data() instead of ptr
cv::namedWindow("reconstructed", cv::WINDOW_AUTOSIZE);
cv::imshow("reconstructed", restored);
cv::waitKey(0);
Extended explanation:
Mat is stored as a contiguous block of memory, if created using one of its constructors or when copied to another Mat using clone() or similar methods. To convert to an array or vector we need the address of its first block and array/vector length.
Pointer to internal memory block
Mat::data is a public uchar pointer to its memory.
But this memory may not be contiguous. As explained in other answers, we can check whether mat.data is pointing to contiguous memory using mat.isContinuous(). Unless you need extreme efficiency, you can obtain a continuous version of the mat using mat.clone() in O(N) time (N = number of elements from all channels). However, when dealing with images read by cv::imread() we will rarely ever encounter a non-continuous mat.
Length of array/vector
Q: Should it be rows*cols*channels, right?
A: Not always. It can be rows*cols*x*y*channels.
Q: Should it be equal to mat.total()?
A: True for a single-channel mat, but not for a multi-channel mat.
The length of the array/vector is slightly tricky because of OpenCV's poor documentation. We have the public member Mat::size, which stores only the dimensions of a single Mat without channels. For an RGB image, Mat.size = [rows, cols] and not [rows, cols, channels]. Mat.total() returns the total number of elements in a single channel of the mat, which is equal to the product of the values in mat.size. For an RGB image, total() = rows*cols. Thus, for any general Mat, the length of the continuous memory block is mat.total()*mat.channels().
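A tiny numeric check of that rule (the sizes below are arbitrary):
cv::Mat rgb = cv::Mat::zeros(480, 640, CV_8UC3);
size_t length = rgb.total() * rgb.channels(); // 307200 * 3 == 921600 bytes for CV_8UC3
// rgb.total() == 480*640 == 307200 elements per channel, rgb.channels() == 3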
Reconstructing Mat from array/vector
Apart from array/vector we also need the original Mat's mat.size [array like] and mat.type() [int]. Then using one of the constructors that take data's pointer, we can obtain original Mat. The optional step argument is not required because our data pointer points to continuous memory. I used this method to pass Mat as Uint8Array between nodejs and C++. This avoided writing C++ bindings for cv::Mat with node-addon-api.
References:
Create memory continuous Mat
OpenCV Mat data layout
Mat from array
Here is another possible solution, assuming the matrix has one column (you can reshape the original Mat to a one-column Mat via reshape):
Mat matrix= Mat::zeros(20, 1, CV_32FC1);
vector<float> vec;
matrix.col(0).copyTo(vec);
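In case it helps, reshaping an arbitrary (continuous) Mat down to that one-column form first might look like this sketch:
Mat src = Mat::ones(4, 5, CV_32FC1);
Mat oneCol = src.reshape(1, src.total()); // 20 rows, 1 column; requires a continuous Mat
vector<float> vec;
oneCol.col(0).copyTo(vec);                // vec now holds all 20 floats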
None of the examples provided here work for the generic case of N-dimensional matrices. Anything using "rows" assumes there are only rows and columns; a 4-dimensional matrix might have more.
Here is some example code that copies a non-continuous N-dimensional matrix into a continuous memory block and then converts it back into a cv::Mat:
#include <iostream>
#include <cstdint>
#include <cstring>
#include <opencv2/opencv.hpp>
int main(int argc, char**argv)
{
if ( argc != 2 )
{
    std::cerr << "Usage: " << argv[0] << " <Image_Path>\n";
    return -1;
}
cv::Mat origSource = cv::imread(argv[1],1);
if (!origSource.data) {
    std::cerr << "Can't read image";
    return -1;
}
// this will select a subsection of the original source image - WITHOUT copying the data
// (the header will point to a region of interest, adjusting data pointers and row step sizes)
cv::Mat sourceMat = origSource(cv::Range(origSource.size[0]/4,(3*origSource.size[0])/4),cv::Range(origSource.size[1]/4,(3*origSource.size[1])/4));
// correctly copy the contents of an N dimensional cv::Mat
// works just as fast as copying a 2D mat, but has much more difficult to read code :)
// see http://stackoverflow.com/questions/18882242/how-do-i-get-the-size-of-a-multi-dimensional-cvmat-mat-or-matnd
// copy this code in your own cvMat_To_Char_Array() function which really OpenCV should provide somehow...
// keep in mind that even Mat::clone() aligns each row at a 4 byte boundary, so uneven sized images always have stepgaps
size_t totalsize = sourceMat.step[sourceMat.dims-1];
const size_t rowsize = sourceMat.step[sourceMat.dims-1] * sourceMat.size[sourceMat.dims-1];
size_t coordinates[CV_MAX_DIM] = {0}; // fixed upper bound; a variable-length array is not standard C++ and cannot be brace-initialized
std::cout << "Image dimensions: ";
for (int t=0;t<sourceMat.dims;t++)
{
// calculate total size of multi dimensional matrix by multiplying dimensions
totalsize*=sourceMat.size[t];
std::cout << (t>0?" X ":"") << sourceMat.size[t];
}
// Allocate destination image buffer
uint8_t * imagebuffer = new uint8_t[totalsize];
size_t srcptr=0,dptr=0;
std::cout << std::endl;
std::cout << "One pixel in image has " << sourceMat.step[sourceMat.dims-1] << " bytes" <<std::endl;
std::cout << "Copying data in blocks of " << rowsize << " bytes" << std::endl ;
std::cout << "Total size is " << totalsize << " bytes" << std::endl;
while (dptr<totalsize) {
    // we copy entire rows at once, so lowest iterator is always [dims-2]
    // this is legal since OpenCV does not use 1 dimensional matrices internally (a 1D matrix is a 2d matrix with only 1 row)
    std::memcpy(&imagebuffer[dptr],&(((uint8_t*)sourceMat.data)[srcptr]),rowsize);
    // destination matrix has no gaps so rows follow each other directly
    dptr += rowsize;
    // src matrix can have gaps so we need to calculate the address of the start of the next row the hard way
    // see *brief* text in opencv2/core/mat.hpp for address calculation
    coordinates[sourceMat.dims-2]++;
    srcptr = 0;
    for (int t=sourceMat.dims-2;t>=0;t--) {
        if (coordinates[t]>=sourceMat.size[t]) {
            if (t==0) break;
            coordinates[t]=0;
            coordinates[t-1]++;
        }
        srcptr += sourceMat.step[t]*coordinates[t];
    }
}
// this constructor assumes that imagebuffer is gap-less (if not, a complete array of step sizes must be given, too)
cv::Mat destination=cv::Mat(sourceMat.dims, sourceMat.size, sourceMat.type(), (void*)imagebuffer);
// and just to prove that sourceMat points to the same memory as origSource, we draw a line through it
cv::line(sourceMat,cv::Point(0,0),cv::Point(sourceMat.size[1],sourceMat.size[0]),CV_RGB(255,0,0),3);
cv::imshow("original image",origSource);
cv::imshow("partial image",sourceMat);
cv::imshow("copied image",destination);
while (cv::waitKey(60)!='q');
}
Instead of getting the image row by row, you can put it directly into an array. For a CV_8U image you can use a byte array; for other types, check the documentation. (This snippet uses the OpenCV Java bindings.)
Mat img; // Should be CV_8U for using byte[]
int size = (int)img.total() * img.channels();
byte[] data = new byte[size];
img.get(0, 0, data); // Gets all pixels
byte * matToBytes(Mat image)
{
    int size = image.total() * image.elemSize();
    byte * bytes = new byte[size]; // delete[] later
    std::memcpy(bytes, image.data, size * sizeof(byte));
    return bytes;
}
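A possible call site (assuming byte is a typedef for unsigned char, which the snippet above seems to presuppose):
byte *buf = matToBytes(image);
// ... use buf ...
delete[] buf; // matToBytes allocates with new[], so the caller must free it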
You can use iterators:
Mat matrix = ...;
std::vector<float> vec(matrix.begin<float>(), matrix.end<float>());
cv::Mat m;
m.create(10, 10, CV_32FC3);
float *array = (float *)malloc( 3*sizeof(float)*10*10 );
cv::MatConstIterator_<cv::Vec3f> it = m.begin<cv::Vec3f>();
for (unsigned i = 0; it != m.end<cv::Vec3f>(); it++ ) {
    for ( unsigned j = 0; j < 3; j++ ) {
        *(array + i) = (*it)[j];
        i++;
    }
}
Now you have a float array. In case of 8 bit, simply change float to uchar, Vec3f to Vec3b and CV_32FC3 to CV_8UC3.
If you know that your img is a 3-channel image, then you can try this code:
Vec3b* dados = new Vec3b[img.rows*img.cols];
for (int i = 0; i < img.rows; i++)
    for (int j = 0; j < img.cols; j++)
        dados[i*img.cols+j] = img.at<Vec3b>(i,j); // one Vec3b per pixel, so the index is i*cols+j
If you want to check the (i,j) Vec3b you can write:
std::cout << (Vec3b)img.at<Vec3b>(i,j) << std::endl;
std::cout << (Vec3b)dados[i*img.cols+j] << std::endl;
Since the answer above is not entirely accurate, as mentioned in its comments, but its edit queue is full, I have to add correct one-liners.
Mat(uchar, 1 channel) to vector(uchar):
std::vector<uchar> vec = (image.isContinuous() ? image : image.clone()).reshape(1, 1); // data copy here
vector(any type) to Mat(the same type):
Mat m(vec, false); // false(by default) -- do not copy data
I have a cv::Mat that I have already filled with some values; how do I clear its contents?
If you want to release the memory of the Mat variable use release().
Mat m;
// initialize m or do some processing
m.release();
For a vector of cv::Mat objects you can release the memory of the whole vector with myvector.clear().
std::vector<cv::Mat> myvector;
// initialize myvector ..
myvector.clear(); // to release the memory of the vector
From the docs:
// sets all or some matrix elements to s
Mat& operator = (const Scalar& s);
then we could do
m = Scalar(0,0,0);
to fill it with black pixels. Scalar has 4 components; the last one (alpha) is optional.
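If I remember the API correctly, the equivalent explicit call is Mat::setTo:
m.setTo(cv::Scalar(0,0,0)); // same effect as the assignment above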
You should call the release() function.
Mat img = Mat(Size(width, height), CV_8UC3, Scalar(0, 0, 0));
img.release();
You can release the current contents or assign a new Mat.
Mat m = Mat::ones(1, 5, CV_8U);
cout << "m: " << m << endl;
m.release(); //this will remove Mat m from memory
//Another way to clear the contents is by assigning an empty Mat:
m = Mat();
//After this the Mat can be re-assigned another value for example:
m = Mat::zeros(2,3, CV_8U);
cout << "m: " << m << endl;
You could always reassign it if you want to empty the Mat but keep using the variable. I don't know if that's what you want, but since the other answers to "clearing" a Mat suggest .release(), I thought I'd mention this.
Edit: My bad, I didn't realise how unclear my answer was. I was just answering the question of how to clear a Mat variable of its contents. Another answer already pointed out that one can simply call .release() on the variable; for example, for a variable like
cv::Mat testMat; that is later assigned a value (as the question implied),
a simple testMat.release() works, and if that's what the OP wants then there you go. But in the off chance that the OP just wants to reset the variable, I thought I'd mention that it can simply be reassigned later on, e.g. testMat = *some new value*. Also, I mixed up define and declare. My bad.
I am working with images in C++ with OpenCV.
I wrote code with a two-dimensional uchar array where I can read the pixel values of an image loaded with imread in grayscale, using .at<uchar>(i,j).
However, I would like to do the same thing for color images. Since I know that to access the pixel values I now need .at<Vec3b>(i,j)[0], .at<Vec3b>(i,j)[1] and .at<Vec3b>(i,j)[2], I made a similar 2D Vec3b array.
But I don't know how to fill this array with the pixel values. It has to be a 2D array.
I tried:
array[width][height].val[0]=img.at<Vec3b>(i,j)[0]
but that didn't work.
I didn't find an answer in the OpenCV docs or here either.
Does anybody have an idea?
I've included some of my code. I need an array because I already have my whole algorithm working, using an array, for the images in grayscale with only one channel.
The grayscale code is like that:
for(int i=0;i<height;i++){
    for(int j=0;j<width;j++){
        image_data[i*width+j]=all_images[nb_image-1].at<uchar>(i,j);
    }
}
where, from
std::vector<cv::Mat> all_images
I read each image (I have a long sequence), retrieve the pixel values into the uchar array image_data, and process them.
I now want to do the same for RGB images, but I can't manage to read the pixel data of each channel and put it into an array.
This time image_data is a Vec3b array, and the code I'm trying looks like this:
for(int i=0;i<height;i++){
    for(int j=0;j<width;j++){
        image_data[0][i*width+j]=all_images[nb_image-1].at<cv::Vec3b>(i,j)[2];
        image_data[1][i*width+j]=all_images[nb_image-1].at<cv::Vec3b>(i,j)[1];
        image_data[2][i*width+j]=all_images[nb_image-1].at<cv::Vec3b>(i,j)[0];
    }
}
But this doesn't work, so I am now at a loss: I don't know how to fill the image_data array with the values of all three channels without changing the code structure, as this array is then used in my image processing algorithm.
I don't understand exactly what you are trying to do.
You can directly read a color image with:
cv::Mat img = cv::imread("image.jpeg",1);
Your matrix (img) type will be CV_8UC3; then you can access each pixel as you said, using:
img.at<cv::Vec3b>(row,col)[channel].
If you have a 2D array of Vec3b as Vec3b myArray[n][m];
You can access the values like that:
myArray[i][j](k), where k = {0,1,2}, since Vec3b behaves like a small fixed-size vector.
Here is the code I just tested, and it works.
#include <iostream>
#include <cstdlib>
#include <opencv/cv.h>
#include <opencv/highgui.h>
int main(int argc, char**argv){
cv::Mat img = cv::imread("image.jpg",1);
cv::imshow("image",img);
cv::waitKey(0);
cv::Vec3b firstline[img.cols];
for(int i=0;i<img.cols;i++){
    // access to matrix
    cv::Vec3b tmp = img.at<cv::Vec3b>(0,i);
    std::cout << (int)tmp(0) << " " << (int)tmp(1) << " " << (int)tmp(2) << std::endl;
    // access to my array
    firstline[i] = tmp;
    std::cout << (int)firstline[i](0) << " " << (int)firstline[i](1) << " " << (int)firstline[i](2) << std::endl;
}
return EXIT_SUCCESS;
}
In your edited first message, this line is strange:
image_data[0][i*width+j]=all_images[nb_image-1].at<cv::Vec3b>(i,j)[2];
If image_data is your colored image, then it should be written like this:
image_data[i][j] = all_images[nb_image-1].at<cv::Vec3b>(i,j);
I have a function that I would like to apply to each pixel in a YUN image (call it src). I would like the output to be saved to a separate image (call it dst).
I know I can achieve this through pointer arithmetic and accessing the underlying matrix of the image. I was wondering if there is an easier way, say a predefined "map" function that allows me to map a function over all the pixels?
Thanks,
Since I don't know what a YUN image is, I'll assume you know how to convert RGB to that format.
I'm not aware of an easy way to do the map function you mentioned. Anyway, OpenCV has a few predefined functions to do image conversion, including
cvCvtColor(color_frame, gray_frame, CV_BGR2GRAY);
which you might want to take a closer look at.
If you would like to do your own, you would need to access each pixel of the image individually, and this code shows you how to do it (the code below skips all kinds of error and return checks for the sake of simplicity):
// Loading src image
IplImage* src_img = cvLoadImage("input.png", CV_LOAD_IMAGE_UNCHANGED);
int width = src_img->width;
int height = src_img->height;
int bpp = src_img->nChannels;
// Temporary buffer to save the modified image
char* buff = new char[width * height * bpp];
// Loop to iterate over each pixel of the original img
for (int i=0; i < width*height*bpp; i+=bpp)
{
    /* Perform pixel operation inside this loop */
    if (!(i % (width*bpp))) // printing an empty line for better readability
        std::cout << std::endl;
    // Note: OpenCV stores the channels of a loaded image in BGR order,
    // and imageData is a (signed) char*, hence the casts before printing.
    std::cout << std::dec << "B:" << (int)(unsigned char) src_img->imageData[i] <<
                             " G:" << (int)(unsigned char) src_img->imageData[i+1] <<
                             " R:" << (int)(unsigned char) src_img->imageData[i+2] << " ";
    /* Let's say you wanted to do a lazy grayscale conversion */
    char gray = ((unsigned char) src_img->imageData[i] +
                 (unsigned char) src_img->imageData[i+1] +
                 (unsigned char) src_img->imageData[i+2]) / 3;
    buff[i] = gray;
    buff[i+1] = gray;
    buff[i+2] = gray;
}
IplImage* dst_img = cvCreateImage(cvSize(width, height), src_img->depth, bpp);
memcpy(dst_img->imageData, buff, width * height * bpp); // copy into the image's own buffer rather than replacing the pointer
if (!cvSaveImage("output.png", dst_img))
{
std::cout << "ERROR: Failed cvSaveImage" << std::endl;
}
Basically, the code loads a color image from the hard disk and performs a grayscale conversion on each pixel, saving the result to a temporary buffer. Later, it creates another IplImage with the grayscale data and saves it to a file on disk.
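For what it's worth, newer OpenCV versions (3.x and later) do ship something close to the "map" you asked about: cv::Mat::forEach, which applies a functor to every pixel and may run it in parallel. A hedged sketch, assuming a 3-channel 8-bit source image:
#include <opencv2/opencv.hpp>
int main() {
    cv::Mat src = cv::imread("input.png", cv::IMREAD_COLOR);
    if (src.empty()) return -1;
    cv::Mat dst = src.clone(); // forEach mutates in place, so work on a copy
    dst.forEach<cv::Vec3b>([](cv::Vec3b &px, const int * /*position*/) {
        // the same lazy grayscale conversion as in the loop above
        uchar gray = static_cast<uchar>((px[0] + px[1] + px[2]) / 3);
        px[0] = px[1] = px[2] = gray;
    });
    cv::imwrite("output.png", dst);
    return 0;
}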