I'm trying to pass a huge Mat image (98304x51968) between OpenCV and ITK using the ITK-to-OpenCV bridge. I get this error:
Insufficient memory (Overflow for imageSize) in cvInitImageHeader,
file opencv\modules\core\src\array.cpp, line 2961.
Does this mean that OpenCV has a limit on the size of images?
Good news: since the pull request "handle huge matrices correctly" (#11505), you should be able to do something like this (code taken from the test):
Mat m(65000, 40000, CV_8U);
ASSERT_FALSE(m.isContinuous());
uint64 i, n = (uint64)m.rows*m.cols;
for( i = 0; i < n; i++ )
    m.data[i] = (uchar)(i & 255);
cv::threshold(m, m, 127, 255, cv::THRESH_BINARY);
int nz = cv::countNonZero(m); // FIXIT 'int' is not enough here (overflow is possible with other inputs)
ASSERT_EQ((uint64)nz, n / 2);
Since countNonZero() returns an int, overflow is possible. This means that you should be able to create huge matrices, but not every OpenCV function can handle them correctly.
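As a hedged workaround of my own (not part of the PR), you can count non-zero pixels row by row and accumulate the total in a 64-bit integer, so countNonZero()'s int return value only ever has to hold a per-row count:
uint64 total_nz = 0;
for (int r = 0; r < m.rows; ++r)
    total_nz += (uint64)cv::countNonZero(m.row(r)); // each row count fits in an int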
Regarding your issue, this is the code for ITKImageToCVMat in v5.0a02:
template<typename TInputImageType>
cv::Mat
OpenCVImageBridge::ITKImageToCVMat(const TInputImageType* in, bool force3Channels)
{
  // Extra copy, but necessary to prevent memory leaks
  IplImage* temp = ITKImageToIplImage<TInputImageType>(in, force3Channels);
  cv::Mat out = cv::cvarrToMat( temp, true );
  cvReleaseImage(&temp);
  return out;
}
As you can see, an IplImage is still used internally, which is most likely the source of your error.
Your best option currently is to do the conversion yourself. Maybe something like this (I don't know ITK well; this assumes the same input and output type and a single channel):
typename ImageType::RegionType region = in->GetLargestPossibleRegion();
typename ImageType::SizeType size = region.GetSize();
unsigned int w = static_cast< unsigned int >( size[0] );
unsigned int h = static_cast< unsigned int >( size[1] );
Mat m(h, w, CV_8UC1, in->GetBufferPointer());
No copy is involved here. If you want to copy, you can do:
Mat m_copy = m.clone();
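For completeness, here is a minimal self-contained sketch of the same idea, assuming an itk::Image<unsigned char, 2> (the helper name WrapItkImage is mine, not part of ITK or OpenCV):
#include <itkImage.h>
#include <opencv2/core/core.hpp>

using ImageType = itk::Image<unsigned char, 2>;

// Wrap the ITK buffer in a cv::Mat header without copying.
// The ITK image must outlive the returned Mat unless deepCopy is true.
cv::Mat WrapItkImage(ImageType* in, bool deepCopy = false)
{
  ImageType::RegionType region = in->GetLargestPossibleRegion();
  ImageType::SizeType size = region.GetSize();
  int w = static_cast<int>(size[0]);
  int h = static_cast<int>(size[1]);
  cv::Mat m(h, w, CV_8UC1, in->GetBufferPointer());
  return deepCopy ? m.clone() : m;
}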
There seems to be a signed int (typically 32 bit) limitation in IplImage:
From the .cpp file named in the error, here's the code snippet that leads to it:
const int64 imageSize_tmp = (int64)image->widthStep*(int64)image->height;
image->imageSize = (int)imageSize_tmp;
if( (int64)image->imageSize != imageSize_tmp )
    CV_Error( CV_StsNoMem, "Overflow for imageSize" );
It looks like (without having checked) image->imageSize is a 32-bit signed int, and this part of the code detects and handles overflows. According to the link you posted in the comments, the IplImage "bug" might have been fixed (I didn't verify that), so MAYBE you could remove this overflow-detection step in the OpenCV code for newer IplImage versions, but that's just a guess and has to be confirmed. You'll have to check the type of image->imageSize: if it is a 64-bit type, you can probably change the OpenCV code to support Mats bigger than 2147483647 bytes.
EDIT/REMARK: I checked the code in OpenCV 3.4, and the quoted line is still the same there, so there is probably no change in version 4.0 yet either.
If you are sure that the IplImage limitation has been fixed, you can try this:
const int64 imageSize_tmp = (int64)image->widthStep*(int64)image->height;
image->imageSize = imageSize_tmp; // imageSize isn't 32 bit signed int anymore!
//if( (int64)image->imageSize != imageSize_tmp ) // no overflow detection necessary anymore
// CV_Error( CV_StsNoMem, "Overflow for imageSize" ); // no overflow detection necessary anymore
but better make sure that IplImage's imageSize is 64 bit now ;)
UPDATE: The linked fix in https://github.com/opencv/opencv/pull/7507/commits/a89aa8c90a625c78e40f4288d145996d9cda3599 ADDED the overflow detection, so PROBABLY IplImage still has the 32 bit int imageSize limitation! Be careful here!
Please let me know if this question is too broad, but I am trying to learn some C++, so I thought it would be a good idea to try to recreate some OpenCV functions.
I am still grabbing frames or reading the image with OpenCV's API, but I then want to feed the cv::Mat into my custom function(s), where I modify its data and return it for display. (For example, a blur function: I pass the original Mat to a padding function, then its output to a function that convolves the padded image with the blurring kernel and returns the Mat to OpenCV for display.)
OpenCV functions use a function argument as the output matrix ( cv_foo(cv::Mat src_frame, cv::Mat dst_frame) ), but I am not entirely clear on how this works, so I have tried a more familiar approach, something like
cv::Mat my_foo(cv::Mat src_frame) {
    // do processing on src_frame data
    return dst_frame;
}
where, to access the data from src_frame, I use uchar* framePtr = frame.data; and to create dst_frame I followed this suggestion:
cv::Mat dst_frame = cv::Mat(n_rows, n_cols, CV_8UC3);
memcpy(dst_frame.data, &new_data_array, sizeof(new_data_array));
I have, however, encountered various segmentation faults that I find hard to debug, as they seem to occur almost at random (could this be due to the way I am handling memory with frame.data, or something like that?).
So to come back to my original question, what is the best way to access, modify and pass the data from a cv::Mat in the most consistent way?
I think what would make the most intuitive sense to me (coming from numpy) would be to extract the data array from the original Mat, use that throughout my processing, and then repackage it into a Mat before displaying. That would also allow me to feed any custom array into the processing without having to turn it into a Mat first, but I am not sure how best to do that (or whether it is the right approach).
Thank you!
EDIT:
I will try to highlight the main bug in my code.
One of the functions I am trying to replicate is a conversion from BGR to greyscale; my code looks like this:
cv::Mat bgr_to_greyscale(cv::Mat& frame){
    int n_rows = frame.rows;
    int n_cols = frame.cols;
    uchar* framePtr = frame.data;
    int channels = frame.channels();
    uchar grey_array[n_rows*n_cols];
    for(int i=0; i<n_rows; i++){
        for(int j=0; j<n_cols; j++){
            uchar pixel_b = framePtr[i*n_cols*channels + j*channels];
            uchar pixel_g = framePtr[i*n_cols*channels + j*channels + 1];
            uchar pixel_r = framePtr[i*n_cols*channels + j*channels + 2];
            uchar pixel_grey = 0.299*pixel_r + 0.587*pixel_g + 0.144*pixel_b;
            grey_array[i*n_cols + j] = pixel_grey;
        }
    }
    cv::Mat dst_frame = cv::Mat(n_rows, n_cols, CV_8UC1, &grey_array);
    return dst_frame;
}
However, when I display the result of this function on a sample image, the bottom part of the image looks like random noise. How can I fix this? What exactly is going wrong in my code?
Thank you!
This question is too broad to answer in any detail, but generally a cv::Mat is a wrapper around the image data, akin to the way an std::vector<int> is a wrapper around a dynamically allocated array of int values, or an std::string is a wrapper around a dynamically allocated array of characters, with one exception: a cv::Mat will not perform a deep copy of the image data on assignment or when the copy constructor is used.
std::vector<int> b = { 1, 2, 3, 4};
std::vector<int> a = b;
// a now contains a copy of b, and a[0] = 42 will not affect b.
cv::Mat b = cv::imread( ... );
cv::Mat a = b;
// a and b now wrap the same data.
That said, you should not be using memcpy et al. to copy a cv::Mat. You can make copies with clone or copyTo. From the OpenCV documentation:
Mat F = A.clone();
Mat G;
A.copyTo(G);
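As a hedged sketch of the "output argument" style mentioned in the question (my own example, with an arbitrary invert operation), such a function typically takes the destination by reference and calls Mat::create, which reallocates only when the size or type does not already match:
#include <opencv2/core/core.hpp>

void my_invert(const cv::Mat& src, cv::Mat& dst)
{
    CV_Assert(src.type() == CV_8UC3);
    dst.create(src.size(), src.type()); // no-op if dst already matches
    for (int r = 0; r < src.rows; ++r)
    {
        // Row pointers stay valid even for non-continuous Mats.
        const uchar* s = src.ptr<uchar>(r);
        uchar* d = dst.ptr<uchar>(r);
        for (int c = 0; c < src.cols * src.channels(); ++c)
            d[c] = 255 - s[c];
    }
}
Returning a cv::Mat by value, as in your snippet, also works because the header is cheap to copy and the pixel buffer is reference-counted, as long as the returned Mat owns its data (e.g. created with create() or clone()) rather than wrapping a local array.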
I am working with the Arm Compute Library (link) to convert an OpenCV application to a more efficient code base.
I would like to import data from an OpenCV Mat, which I've done successfully like this:
arm_compute::Image matACL;
matACL.allocator()->init(arm_compute::TensorInfo(mat.cols, mat.rows, arm_compute::Format::U8)); // Initialise tensor's dimensions
matACL.allocator()->import_memory(arm_compute::Memory(mat.data)); //Allocate the image without any padding.
//matACL.allocator()->import_memory(arm_compute::Memory(new cvMatData(mat.data)));
Beware: versions 18.05 and above of the ACL need an implemented memory interface, which I have created a gist for. That's what the commented-out line above is for.
I can run different operations on the image (threshold or Gaussian blur, for example) and I can see the correct output in an OpenCV window, but whenever I use the Canny edge detector I get a messed-up output image. I opened an issue on GitHub a while ago, but they couldn't find a solution either.
I have implemented the Canny edge NEON path the way it is done in the NECannyEdge.cpp file to better understand what is happening, and I copy the resulting data into an OpenCV Mat, keeping a pointer to it. This is how I convert the result back to an OpenCV Mat:
ptr = (unsigned char*)malloc(mat.cols * mat.rows * sizeof(unsigned char));
for (unsigned int z = 0; z < 0; ++z)
{
    for (unsigned int y = 0; y < mat.rows; ++y)
    {
        memcpy(ptr + z * (mat.cols * mat.rows) + y * mat.cols,
               matACL.buffer() + matACL.info()->offset_element_in_bytes(Coordinates(0, y, z)),
               mat.cols * sizeof(unsigned char));
    }
}
and an alternative:
Window output_window;
output_window.use_tensor_dimensions(shape, Window::DimY);
Iterator output_it(&matACL, output_window);
execute_window_loop(output_window,
[&](const Coordinates & id)
{
memcpy(ptr + id.z() * (mat.cols * mat.rows) + id.y() * mat.cols, output_it.ptr(), mat.cols * sizeof(unsigned char));
}, output_it);
The image sometimes shows a correct Canny edge result, but most of the time it shows random, seemingly unfinished data.
I checked whether it might be a race condition, but the implementation should be single-threaded, and I can't figure out where the problem is. Does anyone have an idea?
How can I successfully use the data from an OpenCV image in the Canny edge detector of the Arm Compute Library? Maybe there are some steps during the import that I missed?
Thanks, Greetings
I found where I was going wrong and developed this function, which creates an OpenCV Mat from an ACL Image:
void ACLImageToMat(arm_compute::Image &aCLImage, cv::Mat &cVImage, std::unique_ptr<uint8_t[]> &cVImageDataPtr)
{
    size_t width = aCLImage.info()->valid_region().shape.x();
    size_t height = aCLImage.info()->valid_region().shape.y();
    cVImageDataPtr = std::make_unique<uint8_t[]>(width * height);
    auto ptr_src = aCLImage.buffer();

    // Walk the ACL tensor element by element and copy into the plain buffer.
    arm_compute::Window input_window;
    input_window.use_tensor_dimensions(aCLImage.info()->tensor_shape());
    arm_compute::Iterator input_it(&aCLImage, input_window);
    int counter = 0;
    arm_compute::execute_window_loop(input_window,
        [&](const arm_compute::Coordinates & id)
        {
            cVImageDataPtr.get()[counter++] = ptr_src[aCLImage.info()->offset_element_in_bytes(id)];
        },
        input_it);

    // Wrap the copied buffer in a Mat header using the ACL image's dimensions.
    cVImage = cv::Mat(static_cast<int>(height), static_cast<int>(width), CV_8UC1, cVImageDataPtr.get());
}
To initialize this for Canny I did the following:
arm_compute::Image matACL;
matACL.allocator()->init(arm_compute::TensorInfo(eye.cols, eye.rows, arm_compute::Format::U8));
matACL.allocator()->import_memory(arm_compute::Memory(eye.data));
arm_compute::Image matACLCanny;
matACLCanny.allocator()->init(arm_compute::TensorInfo(eye.cols, eye.rows, arm_compute::Format::U8));
arm_compute::NECannyEdge canny {};
canny.configure(&matACL, &matACLCanny, 300, 150, 3, 1, arm_compute::BorderMode::REPLICATE);
matACLCanny.allocator()->allocate();
canny.run();
The IMPORTANT thing is to call the allocate function of the output image AFTER configuring the canny edge detector. I found this somewhere in the ACL documentation a while ago, but I can't remember where exactly.
I hope this helps someone who stumbles across converting images between the ACL and OpenCV!
After reading an image of unknown depth and channel count, I want to access its pixels one by one.
On OpenCV 1.x the code goes:
IplImage * I = cvLoadImage( "myimage.tif" );
CvScalar pixel = cvGet2D( I, y, x );
but on OpenCV 2.x the cv::Mat::at() method demands that I know the image's type:
cv::Mat I = cv::imread( "myimage.tif" );
if( I.depth() == CV_8U && I.channels() == 3 ) {
    cv::Vec3b pixel = I.at<cv::Vec3b>( x, y );
} else if( I.depth() == CV_32F && I.channels() == 1 ) {
    float pixel = I.at<float>( x, y );
}
Is there a function resembling cvGet2D that can receive a cv::Mat and return a cv::Scalar without knowing the image's type at compile time?
For someone who is really a beginner in C++ ...
... and/or a hacker who just needs to save mere seconds of code typing to finish off the last project:
cv::Mat mat = ...; // something
cv::Scalar value = cv::mean(mat(cv::Rect(x, y, 1, 1)));
(Disclaimer: This code is only slightly less wasteful than a young man dying for a revolutionary cause.)
The short answer is no. There's no such function in the C++ API.
The rationale behind this is performance. cv::Scalar (and CvScalar) is the same thing as cv::Vec<double,4>. So, for any Mat type other than CV_64FC4, you'll need a conversion to obtain cv::Scalar. Moreover, this method would be a giant switch, like in your example (you have only 2 branches).
But I suppose quite often this function would be convenient, so why not have it? My guess is that people would tend to overuse it, resulting in really bad performance of their algorithms. So OpenCV makes it just a tiny bit less convenient to access individual pixels, in order to force client code to use statically typed methods. This isn't a big deal convenience-wise, since more often than not you actually know the type statically, and it's a really big deal performance-wise. So I consider it a good trade-off.
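For illustration, a minimal sketch of what such a helper would look like (my own code, covering only the two cases from the question):
#include <opencv2/core/core.hpp>
#include <stdexcept>

cv::Scalar getPixel(const cv::Mat& I, int row, int col)
{
    // One case per Mat type you actually need; extend as required.
    switch (I.type())
    {
    case CV_8UC3:
    {
        cv::Vec3b p = I.at<cv::Vec3b>(row, col);
        return cv::Scalar(p[0], p[1], p[2]);
    }
    case CV_32FC1:
        return cv::Scalar(I.at<float>(row, col));
    }
    throw std::runtime_error("getPixel: unhandled Mat type");
}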
I had the same issue: I just wanted to test something quickly and performance was not a concern, but all parts of the code use cv::Mat. What I did was the following:
Mat img; // My input mat, initialized elsewhere
// Pretty fast operation: this only creates an IplImage header pointing to the data in the Mat.
// No data is copied and no memory is allocated.
// The header resides on the stack (note its type is "IplImage", not "IplImage*").
IplImage iplImg = (IplImage)img;
// Then you may use the old (slow converting) legacy-functions if you like
CvScalar s = cvGet2D( &iplImg, y, x );
Just a warning: you are using cvLoadImage and imread with default flags. This means that any image you read will be an 8-bit 3-channel image. Use the appropriate flags (IMREAD_ANYDEPTH / IMREAD_ANYCOLOR) if you want to read the image as is (which seems to be your intention).
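For example (a minimal sketch using the flags named above; the file name is from the question):
cv::Mat I = cv::imread("myimage.tif", cv::IMREAD_ANYDEPTH | cv::IMREAD_ANYCOLOR);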
I am trying to acquire single images from a camera, do some processing to them and release the used memory. I've been doing it for quite some time with a code similar to the one that follows:
char* img_data = new char[ len ]; // I retrieve len from the camera.
// Grab the actual image from the camera:
// It fills the previous buffer with the image data.
// It gives width and height.
CvSize size;
size.width = width;
size.height = height;
IplImage* img = cvCreateImageHeader( size, 8, 1 );
img->imageData = img_data;
// Do the processing
cvReleaseImage( &img );
This code runs fine. I have recently read here (in the imageData description) that I shouldn't be assigning data directly to img->imageData, but should use cvSetData() instead, like so:
cvSetData( img, img_data, width );
However, when I do it like this, I get a Segmentation fault at the cvReleaseImage() call.
What am I doing wrong?
Thank you.
EDIT: I have tried to compile and run the program that @karlphillip suggested, and I DO get a segmentation fault when using cvSetData, but it runs fine when assigning the data directly.
I'm using Debian 6 and OpenCV 2.3.1.
I had this problem as well, but I believe I fixed it based on what I gathered in this comment; namely, use cvReleaseImageHeader() and not cvReleaseImage().
For example:
unsigned int width = 100;
unsigned int height = 100;
unsigned int channels = 1;
unsigned char* imageData = (unsigned char*)malloc(width*height*channels);
// set up the image data
IplImage *img = cvCreateImageHeader(cvSize(width, height), IPL_DEPTH_8U, channels);
cvSetData(img, imageData, width*channels);
// use img
cvReleaseImageHeader(&img);
// free(imageData) when finished
The problem is that you are allocating memory the C++ way (using new), while using the C interface of OpenCV, which tries to free that memory block with free() inside cvReleaseImage(). Memory allocations of C and C++ can't be mixed together.
Solution: use malloc() to allocate the memory:
char* img_data = (char*) malloc(len * sizeof(char));
// casting the return of malloc might not be necessary
// if you are using a C++ compiler
EDIT: (due to OP's comment that is still crashing)
Something else that you are not showing us is crashing your application! I seriously recommend that you write a complete/minimal application that reproduces the problem you are having.
The following application works fine in my Mac OS X with OpenCV 2.3.1.
#include <cv.h>
#include <highgui.h>
int main()
{
    char* img_data = (char*) malloc(625);
    CvSize size;
    size.width = 25;
    size.height = 25;
    IplImage* img = cvCreateImageHeader( size, 8, 1 );
    //img->imageData = img_data;
    cvSetData( img, img_data, size.width );
    cvReleaseImage( &img );
    return 0;
}
Generally, if you allocate a header and set data manually, you should deallocate only the header and free the data yourself:
// allocate img_data
IplImage* img = cvCreateImageHeader( size, 8, 1 );
img->imageData = img_data;
cvReleaseImageHeader( &img ); // frees only the header
// free img_data
If you call cvReleaseImage, it will also try to free the data, so you then rely on the OpenCV implementation to do it. That failed in your case because OpenCV uses free() while you allocated with new, and the two are incompatible.
The other option is to allocate with malloc and call cvReleaseImage.
I want to add a detail to this discussion. There seems to be some sort of bug in the cvReleaseImageHeader() function.
You have to release the header first and then free the image data.
If you do it the other way around, i.e., free the image data and then call cvReleaseImageHeader(), it deceptively appears to work, but internally it leaks memory, and after some time your application will crash.
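A minimal sketch of the order described above (my own example; the sizes are placeholders):
unsigned int width = 640, height = 480;
unsigned char* data = (unsigned char*)malloc(width * height);
IplImage* img = cvCreateImageHeader(cvSize(width, height), IPL_DEPTH_8U, 1);
cvSetData(img, data, width);
// ... use img ...
cvReleaseImageHeader(&img); // 1) release the header first
free(data);                 // 2) then free the externally owned data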
I am using OpenCV and saving as a JPEG using the cvSaveImage function, but I am unable to find the JPEG compression factor used by this.
What is cvSaveImage(...)'s JPEG compression factor?
How can I pass the compression factor when using cvSaveImage(...)?
Currently cvSaveImage() is declared to take only two parameters:
int cvSaveImage( const char* filename, const CvArr* image );
However, the "latest tested snapshot" has:
#define CV_IMWRITE_JPEG_QUALITY 1
#define CV_IMWRITE_PNG_COMPRESSION 16
#define CV_IMWRITE_PXM_BINARY 32
/* save image to file */
CVAPI(int) cvSaveImage( const char* filename, const CvArr* image,
const int* params CV_DEFAULT(0) );
I've been unable to find any documentation, but my impression from poking through this code is that you would build an array of int values to pass in the third parameter:
int p[3];
p[0] = CV_IMWRITE_JPEG_QUALITY;
p[1] = desired_quality_value;
p[2] = 0;
I don't know how the quality value is encoded, and I've never tried this, so caveat emptor.
Edit:
Being a bit curious about this, I downloaded and built the latest trunk version of OpenCV, and was able to confirm the above via this bit of throwaway code:
#include "cv.h"
#include "highgui.h"
int main(int argc, char **argv)
{
    int p[3];
    IplImage *img = cvLoadImage("test.jpg");
    p[0] = CV_IMWRITE_JPEG_QUALITY;
    p[1] = 10;
    p[2] = 0;
    cvSaveImage("out1.jpg", img, p);
    p[0] = CV_IMWRITE_JPEG_QUALITY;
    p[1] = 100;
    p[2] = 0;
    cvSaveImage("out2.jpg", img, p);
    exit(0);
}
My "test.jpg" was 2,054 KB, the created "out1.jpg" was 182 KB and "out2.jpg" was 4,009 KB.
Looks like you should be in good shape assuming you can use the latest code available from the Subversion repository.
BTW, the range for the quality parameter is 0-100, default is 95.
OpenCV now has a parameter to set jpeg quality. I'm not sure exactly when this was introduced, but presumably sometime after 2.0.
const int JPEG_QUALITY = 80;
Mat src;
// put data in src
vector<int> params;
params.push_back(CV_IMWRITE_JPEG_QUALITY);
params.push_back(JPEG_QUALITY);
imwrite("filename.jpg", src, params);
If you are using C++0x, you can use this shorter notation:
imwrite("filename.jpg", src, vector<int>({CV_IMWRITE_JPEG_QUALITY, JPEG_QUALITY});
You can probably find this by poking around in the source code here: http://opencvlibrary.svn.sourceforge.net/viewvc/opencvlibrary/
You can't, as the function does not accept such a parameter. If you want to control the compression then the simplest method I can think of is first saving your image as a bitmap with cvSaveImage() (or another lossless format of your choice) and then use another image library to convert it to a JPEG of the desired compression factor.
imwrite("filename.jpeg",src,(vector<int>){CV_IMWRITE_JPEG_QUALITY, 20});
filename.jpeg will be output File name
src be source image read containing variable
(vector<int>) typecasting
{CV_IMWRITE_JPEG_QUALITY, 20} an array of elements to be passed as Param_ID - and Param_value in imwrite function