I am trying to acquire single images from a camera, do some processing on them, and release the used memory. I've been doing this for quite some time with code similar to the following:
char* img_data = new char[ len ]; // I retrieve len from the camera.
// Grab the actual image from the camera:
// It fills the previous buffer with the image data.
// It gives width and height.
CvSize size;
size.width = width;
size.height = height;
IplImage* img = cvCreateImageHeader( size, 8, 1 );
img->imageData = img_data;
// Do the processing
cvReleaseImage( &img );
This code runs fine. I have recently read here (in the imageData description) that I shouldn't be assigning data directly to img->imageData, but should use cvSetData() instead, like so:
cvSetData( img, img_data, width );
However, when I do it like this, I get a Segmentation fault at the cvReleaseImage() call.
What am I doing wrong?
Thank you.
EDIT: I have tried to compile and run the program that @karlphillip suggested, and I DO get a segmentation fault using cvSetData, but it runs fine when assigning the data directly.
I'm using Debian 6 and OpenCV 2.3.1.
I had this problem as well, but I believe it was fixed by what I gathered from this comment: namely, use cvReleaseImageHeader() and not cvReleaseImage().
For example:
unsigned int width = 100;
unsigned int height = 100;
unsigned int channels = 1;
unsigned char* imageData = (unsigned char*)malloc(width*height*channels);
// set up the image data
IplImage *img = cvCreateImageHeader(cvSize(width, height), IPL_DEPTH_8U, channels);
cvSetData(img, imageData, width*channels);
// use img
cvReleaseImageHeader(&img);
// free(imageData) when finished
The problem is that you are allocating memory the C++ way (using new) while using the C interface of OpenCV, which tries to free that memory block with free() inside cvReleaseImage(). C and C++ memory allocations can't be mixed: memory allocated with new must be freed with delete, never with free().
Solution: use malloc() to allocate the memory:
char* img_data = (char*) malloc(len * sizeof(char));
// Note: the cast on malloc's return is required in C++
// (it would be unnecessary in plain C)
EDIT (due to the OP's comment that it is still crashing):
Something else that you are not showing us is crashing your application! I seriously recommend that you write a complete/minimal application that reproduces the problem you are having.
The following application works fine on my Mac OS X with OpenCV 2.3.1.
#include <cv.h>
#include <highgui.h>
int main()
{
char* img_data = (char*) malloc(625);
CvSize size;
size.width = 25;
size.height = 25;
IplImage* img = cvCreateImageHeader( size, 8, 1 );
//img->imageData = img_data;
cvSetData( img, img_data, size.width );
cvReleaseImage( &img );
return 0;
}
Generally, if you allocate a header and set data manually, you should deallocate only the header and free the data yourself:
// allocate img_data
IplImage* img = cvCreateImageHeader( size, 8, 1 );
img->imageData = img_data;
cvReleaseImageHeader( &img ); // frees only the header
// free img_data
If you call cvReleaseImage, it will also try to free the data, so you are relying on the OpenCV implementation to do it. That failed in your case because it uses free() while you allocated with new, and the two are incompatible.
The other option is to allocate with malloc and call cvReleaseImage.
I want to add a detail to this discussion. There seems to be some sort of bug in the cvReleaseImageHeader() function.
You have to release the header first and then free the image data.
If you do it the other way around, i.e., free the image data and then call cvReleaseImageHeader(), it deceptively appears to work. But internally it leaks memory, and after some time your application will crash.
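As a reference, here is a minimal sketch of that order, assuming the data buffer was allocated with new[] as in the original question (the safe pairing is a header-only release plus your own delete[]):
#include <cv.h>

int main()
{
    const int width = 640, height = 480;
    char* img_data = new char[width * height];   // pixel buffer owned by us

    IplImage* img = cvCreateImageHeader(cvSize(width, height), IPL_DEPTH_8U, 1);
    cvSetData(img, img_data, width);             // the header only points at our buffer

    // ... use img ...

    cvReleaseImageHeader(&img);                  // 1) release the header first
    delete[] img_data;                           // 2) then free the data ourselves
    return 0;
}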
Related
I am working on a project, where I want to process my images using C++ OpenCV.
For simplicity's sake, I just want to convert Uint8List to cv::Mat and back.
Following this tutorial, I managed to make a pipeline that doesn't crash the app. Specifically:
I created a function in a .cpp that takes the pointer to my Uint8List, rawBytes, and encodes it as a .jpg:
int encodeIm(int h, int w, uchar *rawBytes, uchar **encodedOutput) {
cv::Mat img = cv::Mat(h, w, CV_8UC3, rawBytes); //CV_8UC3
vector<uchar> buf;
cv::imencode(".jpg", img, buf); // save output into buf. Note that Dart Image.memory can process either .png or .jpg, which is why we're doing this encoding
*encodedOutput = (unsigned char *) malloc(buf.size());
for (int i=0; i < buf.size(); i++)
(*encodedOutput)[i] = buf[i];
return (int) buf.size();
}
Then I wrote a function in a .dart that calls my c++ encodeIm(int h, int w, uchar *rawBytes, uchar **encodedOutput):
//allocate memory heap for the image
Pointer<Uint8> imgPtr = malloc.allocate(imgBytes.lengthInBytes);
//allocate just 8 bytes to store a pointer that will be malloced in C++ that points to our variably sized encoded image
Pointer<Pointer<Uint8>> encodedImgPtr = malloc.allocate(8);
//copy the image data into the memory heap we just allocated
imgPtr.asTypedList(imgBytes.length).setAll(0, imgBytes);
//c++ image processing
//image in memory heap -> processing... -> processed image in memory heap
int encodedImgLen = _encodeIm(height, width, imgPtr, encodedImgPtr);
//
//retrieve the image data from the memory heap
Pointer<Uint8> cppPointer = encodedImgPtr.elementAt(0).value;
Uint8List encodedImBytes = cppPointer.asTypedList(encodedImgLen);
//myImg = Image.memory(encodedImBytes);
return encodedImBytes;
//free memory heap
//malloc.free(imgPtr);
//malloc.free(cppPointer);
//malloc.free(encodedImgPtr); // always frees 8 bytes
}
Then I linked C++ with Dart via:
final DynamicLibrary nativeLib = Platform.isAndroid
? DynamicLibrary.open("libnative_opencv.so")
: DynamicLibrary.process();
final int Function(int height, int width, Pointer<Uint8> bytes, Pointer<Pointer<Uint8>> encodedOutput)
_encodeIm = nativeLib
.lookup<NativeFunction<Int32 Function(Int32 height, Int32 width,
Pointer<Uint8> bytes, Pointer<Pointer<Uint8>> encodedOutput)>>('encodeIm').asFunction();
And finally I show the result in Flutter via:
Image.memory(...)
Now, the pipeline doesn't crash, which means I haven't goofed up memory handling completely, but it doesn't return the original image either, which means I did mess up somewhere.
Original image:
Pipeline output:
Thanks to Richard Heap's guidance in the comments, I managed to fix the pipeline by changing my matrix definition from
cv::Mat img = cv::Mat(h, w, CV_8UC3, rawBytes);
to
vector<uint8_t> buffer(rawBytes, rawBytes + inBytesCount);
Mat img = imdecode(buffer, IMREAD_COLOR);
where inBytesCount is the length of imgBytes.
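For completeness, here is a sketch of what the native function could look like with that change folded in. It is untested and makes some assumptions not in the original post: the bytes coming from Dart are an already-encoded image, an inBytesCount parameter is passed instead of h/w, and the caller frees *encodedOutput.
#include <opencv2/opencv.hpp>
#include <cstdlib>
#include <cstring>
#include <vector>

extern "C" int encodeIm(int inBytesCount, unsigned char *rawBytes,
                        unsigned char **encodedOutput) {
    // Wrap the incoming encoded bytes and decode them to a BGR matrix.
    std::vector<unsigned char> buffer(rawBytes, rawBytes + inBytesCount);
    cv::Mat img = cv::imdecode(buffer, cv::IMREAD_COLOR);

    // ... image processing on img would go here ...

    // Re-encode as .jpg so Dart's Image.memory can display it.
    std::vector<unsigned char> buf;
    cv::imencode(".jpg", img, buf);
    *encodedOutput = (unsigned char *)malloc(buf.size());
    memcpy(*encodedOutput, buf.data(), buf.size());
    return (int)buf.size();
}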
I'm very new to C++ and I'm trying to create a DLL which uses the OpenCV library.
My DLL gets a raw image from another application and creates a Mat from that application's memory buffer. I send the buffer's address, which holds a raw image, to the DLL and read the raw image into OpenCV. This part works.
But after processing the image with OpenCV, I can't write the raw image back to the same memory address.
This is the code snippet that I've tried:
fn_export double createImage(char* address, double width, double height) {
unsigned char* pBuffer = (unsigned char*)address;
memcpy(&pBuffer,&address, sizeof(pBuffer));
cv::Mat img = cv::Mat(height,width, CV_8UC4, pBuffer);
cv::imshow("Original", img);
memcpy(&address, &img.data[0], sizeof(address));
return 1;
}
char* address is the memory address from my application. The other application's buffer doesn't change this way. Does anybody have any advice about this situation?
OK, I solved this issue:
Mat img = Mat(height, width, CV_8UC4, address);
cv::imshow("Image from GM", img);
// same image copy to buffer back;
memcpy(&address[0], &img.data[0], (size_t)(width * height * 4));
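Putting it together, a hedged sketch of how the whole exported function might look (fn_export is the export macro from the question; GaussianBlur is only a placeholder processing step, and the caller's buffer is assumed to hold width*height*4 bytes of 8-bit BGRA data):
#include <opencv2/opencv.hpp>
#include <cstring>

fn_export double createImage(char* address, double width, double height) {
    int w = (int)width, h = (int)height;
    cv::Mat img(h, w, CV_8UC4, address);               // wraps the caller's buffer, no copy
    cv::imshow("Image from GM", img);

    cv::Mat result;
    cv::GaussianBlur(img, result, cv::Size(5, 5), 0);  // placeholder processing step

    memcpy(address, result.data, (size_t)w * h * 4);   // write the result back to the caller
    return 1;
}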
I am new to C++ (as well as CUDA and OpenCV), so I am sorry for any mistakes on my side.
I have existing code that uses CUDA. Until recently it worked with a decoded .png as input, but now I use a camera to generate live images. These images are the new input for the code. Here it is:
using namespace cv;
INT height = 2160;
INT width = 3840;
Mat image(height, width, CV_8UC3);
size_t pitch;
uint8_t* image_gpu;
// capture image
VideoCapture camera(0);
camera.set(CAP_PROP_FRAME_WIDTH, width);
camera.set(CAP_PROP_FRAME_HEIGHT, height);
camera.read(image);
// here I checked that image is definitely still a CV_8UC3 Mat with the initial height and width; and it is
cudaMallocPitch(&image_gpu, &pitch, width * 4, height);
// here I use cv::Mat::data to get the pointer to the data of the image:
cudaMemcpy2D(image_gpu, pitch, image.data, width*4, width*4, height, cudaMemcpyHostToDevice);
The code compiles but I get an "Exception Thrown" at the last line (cudaMemcpy2D) with the following error code:
Exception thrown at 0x00007FFE838D6660 (nvcuda.dll) in realtime.exe: 0xC0000005: Access violation reading location 0x000001113AE10000.
Google did not give me an answer and I do not know how to proceed from here.
Thanks for any hints!
A rather generic way to copy an OpenCV Mat to device memory allocated with cudaMallocPitch is to use the step member of the Mat object. Also, while allocating device memory, it helps to keep a clear picture of how the device memory will be laid out and how the Mat object will be copied into it. Here is a simple example demonstrating the procedure for a video frame captured using VideoCapture.
#include<iostream>
#include<cuda_runtime.h>
#include<opencv2/opencv.hpp>
using std::cout;
using std::endl;
size_t getPixelBytes(int type)
{
switch(type)
{
case CV_8UC1:
case CV_8UC3:
return sizeof(uint8_t);
break;
case CV_16UC1:
case CV_16UC3:
return sizeof(uint16_t);
break;
case CV_32FC1:
case CV_32FC3:
return sizeof(float);
break;
case CV_64FC1:
case CV_64FC3:
return sizeof(double);
break;
default:
return 0;
}
}
int main()
{
cv::VideoCapture cap(0);
cv::Mat frame;
if(cap.grab())
{
cap.retrieve(frame);
}
else
{
cout<<"Cannot read video"<<endl;
return -1;
}
uint8_t* gpu_image;
size_t gpu_pitch;
//Get the number of bytes occupied by a single channel element. VideoCapture mostly returns CV_8UC3 frames, so pixelBytes is usually 1, but just in case.
size_t pixelBytes = getPixelBytes(frame.type());
//Number of actual data bytes occupied by a row.
size_t frameRowBytes = frame.cols * frame.channels() * pixelBytes;
//Allocate pitch linear memory on device
cudaMallocPitch(&gpu_image, &gpu_pitch, frameRowBytes , frame.rows);
//Copy memory from frame to device memory
cudaMemcpy2D(gpu_image, gpu_pitch, frame.ptr(), frame.step, frameRowBytes, frame.rows, cudaMemcpyHostToDevice);
//Rest of the code ...
return 0;
}
Disclaimer:
Code is written in the browser. Not tested yet. Please add CUDA error checking as required
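Since the answer itself leaves out error checking, here is a minimal sketch of the kind of wrapper that could be placed around the CUDA calls above (a common pattern, not something specific to this code):
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Print the error string and abort if a CUDA runtime call fails.
#define CUDA_CHECK(call)                                                  \
    do {                                                                  \
        cudaError_t err = (call);                                         \
        if (err != cudaSuccess) {                                         \
            fprintf(stderr, "CUDA error %s at %s:%d\n",                   \
                    cudaGetErrorString(err), __FILE__, __LINE__);         \
            exit(EXIT_FAILURE);                                           \
        }                                                                 \
    } while (0)

// Usage with the calls from the answer:
// CUDA_CHECK(cudaMallocPitch(&gpu_image, &gpu_pitch, frameRowBytes, frame.rows));
// CUDA_CHECK(cudaMemcpy2D(gpu_image, gpu_pitch, frame.ptr(), frame.step,
//                         frameRowBytes, frame.rows, cudaMemcpyHostToDevice));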
I'm trying to pass a huge Mat image (98304x51968) between OpenCV and ITK using the ITK-to-OpenCV bridge. I get an error:
Insufficient memory (Overflow for imageSize) in cvInitImageHeader,
file opencv\modules\core\src\array.cpp line 2961.
Does this mean that OpenCV has a limit on the size of images?
Good news: since this pull request (handle huge matrices correctly #11505), you should be able to do something like this (code taken from the test):
Mat m(65000, 40000, CV_8U);
ASSERT_FALSE(m.isContinuous());
uint64 i, n = (uint64)m.rows*m.cols;
for( i = 0; i < n; i++ )
m.data[i] = (uchar)(i & 255);
cv::threshold(m, m, 127, 255, cv::THRESH_BINARY);
int nz = cv::countNonZero(m); // FIXIT 'int' is not enough here (overflow is possible with other inputs)
ASSERT_EQ((uint64)nz, n / 2);
Since countNonZero() returns an int, overflow is possible. This means you should be able to create a huge matrix, but not every OpenCV function can handle huge matrices correctly.
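If you need a non-zero count that cannot overflow, one possible workaround (a sketch, not a library function) is to accumulate the per-row counts in a 64-bit integer:
#include <opencv2/opencv.hpp>
#include <cstdint>

// Count non-zero pixels of a single-channel 8-bit Mat without the int limit.
uint64_t countNonZero64(const cv::Mat& m)
{
    CV_Assert(m.type() == CV_8U);
    uint64_t count = 0;
    for (int r = 0; r < m.rows; ++r)
        count += (uint64_t)cv::countNonZero(m.row(r)); // each row fits in an int
    return count;
}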
Regarding your issue, this is the code for ITKImageToCVMat in v5.0a02:
template<typename TInputImageType>
cv::Mat
OpenCVImageBridge::ITKImageToCVMat(const TInputImageType* in, bool force3Channels)
{
// Extra copy, but necessary to prevent memory leaks
IplImage* temp = ITKImageToIplImage<TInputImageType>(in, force3Channels);
cv::Mat out = cv::cvarrToMat( temp, true );
cvReleaseImage(&temp);
return out;
}
As you can see, an IplImage is still used internally, and that should be the source of your error.
Your best option currently is to do the conversion yourself. Maybe something like this (I don't know ITK; assuming the same input and output type and one channel):
typename ImageType::RegionType region = in->GetLargestPossibleRegion();
typename ImageType::SizeType size = region.GetSize();
unsigned int w = static_cast< unsigned int >( size[0] );
unsigned int h = static_cast< unsigned int >( size[1] );
Mat m(h, w, CV_8UC1, in->GetBufferPointer());
No copy is involved here. If you want to copy, you can do:
Mat m_copy = m.clone();
There seems to be a signed int (typically 32-bit) limitation in IplImage.
Here is the code snippet from the named .cpp file that leads to the error:
const int64 imageSize_tmp = (int64)image->widthStep*(int64)image->height;
image->imageSize = (int)imageSize_tmp;
if( (int64)image->imageSize != imageSize_tmp )
CV_Error( CV_StsNoMem, "Overflow for imageSize" );
It looks like (without checking) image->imageSize is a 32-bit signed int, and this part of the code detects and handles overflows. According to the link you posted in the comments, the IplImage "bug" might have been fixed (I didn't check that), so MAYBE you can remove this overflow-detection step in the OpenCV code for newer IplImage versions, but that's just a guess and has to be confirmed. You'll have to check the type of image->imageSize: if it is a 64-bit type, you can probably change the OpenCV code to support Mats bigger than 2147483647 bytes.
EDIT: I checked the code in OpenCV 3.4 and the line is still the same, so there is probably no change in version 4.0 yet.
If you are sure that the IplImage limitation got fixed, you can try this:
const int64 imageSize_tmp = (int64)image->widthStep*(int64)image->height;
image->imageSize = imageSize_tmp; // imageSize isn't 32 bit signed int anymore!
//if( (int64)image->imageSize != imageSize_tmp ) // no overflow detection necessary anymore
// CV_Error( CV_StsNoMem, "Overflow for imageSize" ); // no overflow detection necessary anymore
but better make sure that IplImage's imageSize is 64 bit now ;)
UPDATE: The linked fix in https://github.com/opencv/opencv/pull/7507/commits/a89aa8c90a625c78e40f4288d145996d9cda3599 ADDED the overflow detection, so PROBABLY IplImage still has the 32 bit int imageSize limitation! Be careful here!
I am trying to find a way to avoid using cvCreateImage inside a while loop, because I have realized that this causes a memory leak.
I would like something like the code below, albeit without the memory leak. I don't know why this code is not working; it is what I thought would work, however it breaks when run.
if((capture = cvCreateCameraCapture(0)) == NULL) {
printf("connect cam first\n");
return -1;
}
IplImage *detectImg = cvCreateImage( cvSize(WIDTH, HEIGHT), 8, 1 );
IplImage *frameImage = NULL;
IplImage *notImage = NULL;
while(1){
cvWaitKey(1);
cvSplit(frameImage, a, b, c, NULL);
//detect objec from a,b,c.....output is "detecImg"
cvSetImageROI(detectImg, Roi); // Roi changes depending on the detection result
notImage=cvCloneImage(detectImg);//cvCloneImage,cvCopy not working...
cvNot(notImage, notImage);
copyNotImg = cvCloneImage(notImage);
... continues ...
}
If I use this code below it works fine but leaks a little memory.
if((capture = cvCreateCameraCapture(0)) == NULL) {
printf("connect cam first\n");
return -1;
}
IplImage *detectImg = cvCreateImage(cvSize(WIDTH, HEIGHT), 8, 1);
IplImage *frameImage = NULL;
IplImage *notImage = NULL;
while(1){
cvWaitKey(1);
cvSplit(frameImage, a, b, c, NULL);
//detect objec from a,b,c.....output is "detecImg"
cvSetImageROI( detectImg, Roi); // Roi changes depending on the detection result
notImage=cvCreateImage( cvSize(Roi.width, Roi.height), 8, 1 );
cvNot(notImage, notImage);
copyNotImg= cvCloneImage(notImage);
... continues ...
}
Any insight would be appreciated.
Any images allocated with cvCreateImage need to be released with cvReleaseImage. Are you releasing all images?
Alternatively, you could use the modern C++ OpenCV API, which handles all memory allocation and deallocation for you.
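As a sketch of what the same kind of loop could look like with the C++ API (no manual release calls at all; the detection step and the ROI are placeholders here):
#include <opencv2/opencv.hpp>
#include <cstdio>

int main()
{
    cv::VideoCapture capture(0);
    if (!capture.isOpened()) { printf("connect cam first\n"); return -1; }

    cv::Mat frame, detectImg, notImg;
    while (true) {
        capture >> frame;
        if (frame.empty()) break;

        // Placeholder for the real detection producing detectImg and a ROI:
        cv::cvtColor(frame, detectImg, cv::COLOR_BGR2GRAY);
        cv::Rect roi(0, 0, detectImg.cols / 2, detectImg.rows / 2);

        cv::Mat roiView = detectImg(roi);   // header only, no pixel copy
        cv::bitwise_not(roiView, notImg);   // notImg is allocated/reused automatically

        if (cv::waitKey(1) >= 0) break;
    }
    return 0;                               // all Mats are released automatically
}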
In your first piece of code, you are trying to copy an image into a null image. An image must be allocated sufficient memory resources, i.e.:
IplImage *notImage = NULL;
notImage=cvCloneImage(detectImg);
Here notImage is empty; it has no memory assigned to it, hence your code breaks, whereas in the second case memory is assigned to notImage before copying.
To avoid memory leaks, try this:
IplImage* notImage=cvCloneImage(detectImg);
but remember to release the image at the end.
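That is, something like this inside the loop (a sketch; detectImg is the image from the question):
IplImage* notImage = cvCloneImage(detectImg);  // allocates a new image every iteration
cvNot(notImage, notImage);
// ... use notImage ...
cvReleaseImage(&notImage);                     // release before the next iteration, otherwise it leaks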
Instead of cvCloneImage, first create the image and then use cvCopy() (a.k.a. cvCopyImage) to copy into it, as in the sketch below.
Also, newer versions of OpenCV provide better and easier implementations.
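A sketch of that idea using the variables from the question (WIDTH, HEIGHT, detectImg and Roi): notImage is created once before the loop, the ROI region is copied each iteration with cvCopy, and the image is released exactly once after the loop.
// Allocate once, before the loop, at the full frame size:
IplImage* notImage = cvCreateImage(cvSize(WIDTH, HEIGHT), 8, 1);

while (1) {
    // ... detection fills detectImg and gives Roi ...
    cvSetImageROI(detectImg, Roi);
    cvSetImageROI(notImage, Roi);       // restrict both images to the same ROI
    cvCopy(detectImg, notImage, NULL);  // copy only the ROI region
    cvNot(notImage, notImage);          // cvNot respects the ROI
    // ... use notImage ...
    cvResetImageROI(detectImg);
    cvResetImageROI(notImage);
    cvWaitKey(1);
}

cvReleaseImage(&notImage);              // released exactly once, no per-frame leak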
http://www.cprogramdevelop.com/4885055/
The above link contains some information regarding memory leaks.
Hope this helps