pass Mat object C++ to Unity - c++

I'd like to return a Mat object to Unity from C++ code. However, I get an access violation error in the C++ part, like this:
Unity Editor [version: Unity 2017.3.0f3_a9f86dcd79df]
SaliencyCV.dll caused an Access Violation (0xc0000005)
in module SaliencyCV.dll at 0033:270027f0.
Error occurred at 2018-03-06_235212.
C:\Program Files\Unity\Editor\Unity.exe, run by Dilara.
43% memory in use.
16266 MB physical memory [9199 MB free].
18698 MB paging file [9861 MB free].
134217728 MB user address space [134185466 MB free].
Read from location 990d0000 caused an access violation.
Here is the C++ code:
uchar* cppMethod(uchar* frameData, int WIDTH, int HEIGHT, int* rows, int* cols)
{
    Mat img(HEIGHT, WIDTH, CV_8UC3);
    img.data = frameData;
    flip(img, img, 0);
    Mat result = calculateSaliency(img);
    *rows = result.rows;
    *cols = result.cols;
    int length = result.rows * result.cols * 3;
    uchar* tmpArr = result.data;
    uchar* resultArray = new uchar[length];
    for (int i = 0; i < length; i++)
    {
        resultArray[i] = tmpArr[i];
    }
    return resultArray;
}
Can someone help me?

You should call the correct Mat constructor, the one that accepts an external data pointer, so that the object does not release/destruct the memory that data points to. You can read about this behaviour in Mat::release().
The problem with your code is twofold:
Mat img(HEIGHT, WIDTH, CV_8UC3) allocates a HEIGHT*WIDTH memory block of type CV_8UC3 that is never used (you immediately repoint the data member to a different memory location anyway), and
at function exit, img is destructed, which results in a call to release(), which in turn frees frameData; that is not the intended behaviour.
Change your first two lines to read
Mat img(HEIGHT, WIDTH, CV_8UC3, frameData);
And if you are passing resultArray to C#, where you are most likely not managing the pointed-to memory's lifetime, you will most likely have memory leaks. #Programmer has already suggested in his answer to your previous question that you should allocate the memory in C#, pass it to C++, and write to it in place on the C++ side.
In short, you should have something like:
#include <algorithm>

void cppMethod(uchar *frameData, uchar *out, const int WIDTH, const int HEIGHT,
               int *rows, int *cols) {
    /* this constructor will not manage frameData's lifetime */
    Mat img(HEIGHT, WIDTH, CV_8UC3, frameData);
    /* in-place operation */
    flip(img, img, 0);
    /* local variable --- it will be destructed properly */
    Mat result = calculateSaliency(img);
    /* well-defined if rows and cols are scalars passed by reference */
    *rows = result.rows;
    *cols = result.cols;
    /* make sure length will not overflow */
    int length = result.rows * result.cols * 3;
    /* you don't need this */
    // uchar *tmpArr = result.data;
    /* you should NOT do this */
    // uchar *resultArray = new uchar[length];
    // use std::copy from <algorithm>
    // for (int i = 0; i < length; i++) {
    //     resultArray[i] = tmpArr[i];
    // }
    std::copy(result.data, result.data + length, out);
    // return resultArray;
}

Related

How do you calloc a global variable to the size of input parameters in C++?

I'm using a PortAudio callback where, because of interrupts and such, it's suggested you don't allocate or free in the callback.
float *out_pcm = (float *)calloc(sizeof(float), frames);
My C++ is very basic, but right or wrong, how do you set the array size (frames) of a global variable?
Can you create a global variable
float *out_pcm;
Then, in a function or in main, set its size with something like
out_pcm = calloc(sizeof(float), frames);
?
void paFunc(const float* in, float* out, long frames, void* data) {
    auto start = high_resolution_clock::now();
    paConfig *config = (paConfig*)data;
------------------------------------------------------------------
struct paConfig config;
config.channels = channels;
config.margin = margin;
config.out_pcm = (float *)calloc(sizeof(float), frames);
config.tdoa = (int *)calloc(sizeof(int), channels);
config.in_data = (float *)calloc(sizeof(float), frames * channels);
config.beam_data = (float *)calloc(sizeof(float), frames * channels);
Pa a(paFunc, channels, 1, sample_rate, frames, &config);
Apologies, yes, I should use std::vector<float> out_pcm(frames); but things are slowly getting better.
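For completeness, here is a minimal sketch of the pattern that comment points at: size the buffers once, before the stream starts, and let the callback only read and write them. The paConfig fields and the Pa wrapper call are modeled on the snippet above; everything else is an assumption.

#include <algorithm>
#include <vector>

// Assumed shape of the config struct from the question; real fields may differ.
struct paConfig {
    int channels;
    std::vector<float> out_pcm;   // sized once, up front
    std::vector<float> in_data;
};

// The callback does no allocation; it only touches pre-sized buffers.
void paFunc(const float *in, float *out, long frames, void *data) {
    paConfig *config = static_cast<paConfig *>(data);
    std::copy(in, in + frames, config->out_pcm.begin());
    std::copy(config->out_pcm.begin(), config->out_pcm.end(), out);
}

int main() {
    const long frames = 256;
    const int channels = 2;
    paConfig config;
    config.channels = channels;
    config.out_pcm.resize(frames);             // "set the size" once, in main
    config.in_data.resize(frames * channels);
    // Pa a(paFunc, channels, 1, sample_rate, frames, &config); // as in the question
}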

How to allocate memory using C++ new instead of C malloc

I am now working on homework. There is one thing that confuses me, and I need your advice.
The problem is quite simple and basic, about memory allocation. I am currently studying the book C++ Primer after learning the C language, so I prefer to use new and delete for memory allocation, which failed me for this problem. Here is the problem: the function getNewFrameBuffer is used to allocate memory for a framebuffer of sizeof(Pixel) x width x height; please note that Pixel is a user-defined data type. It then returns the pointer to the allocated memory. It works fine when I use the malloc() function as below:
char* m_pFrameBuffer;
int width = 512, height = 512;

// function call
getNewFrameBuffer(&m_pFrameBuffer, width, height);

// function implementation using malloc
int getNewFrameBuffer(char **framebuffer, int width, int height)
{
    *framebuffer = (char*)malloc(sizeof(Pixel) * width * height);
    if (framebuffer == NULL)
        return 0;
    return 1;
}
However, when I try using the new keyword to allocate memory, it causes an unexpected termination of the program. Here is my code:
int getNewFrameBuffer(char **framebuffer, int width, int height)
{
    framebuffer = new char*[sizeof(Pixel) * width * height];
    if (framebuffer == NULL)
        return 0;
    return 1;
}
What's wrong with my code? Thanks a lot, everyone:)
You should allocate using new char, not new char*, as new char* will allocate that many pointers.
This has also led you to remove the * from *framebuffer =, meaning that the caller's framebuffer parameter will not be changed.
Change the line to
*framebuffer = new char[sizeof(Pixel) * width * height];
Note the *.
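Putting the corrected pieces together, the whole function might read as follows. This is a sketch: it assumes Pixel is in scope, uses the nothrow form of new (plain new throws std::bad_alloc instead of returning NULL), and also checks *framebuffer rather than framebuffer, which both original versions got wrong.

#include <cstddef> // NULL
#include <new>     // std::nothrow

int getNewFrameBuffer(char **framebuffer, int width, int height)
{
    // Allocate bytes, not pointers, and assign through the out-parameter.
    *framebuffer = new (std::nothrow) char[sizeof(Pixel) * width * height];
    if (*framebuffer == NULL) // nothrow new yields NULL on failure
        return 0;
    return 1;
}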

opencv Mat deallocation memory corruption

I am struggling with the release version of my OpenCV wrapper function.
The function code runs fine, but upon completion of the function block, a memory access violation happens.
This problem does not appear in debug mode. The segfault happens upon freeing the heap.
int Myfunc(Arr1D_floatHdl FeatArrHdl, IMAQ_Image *img, someparams *Params)
{
    ImageInfo *Info = NULL;
    //IplImage *CVImage = NULL;
    Info = (ImageInfo*)img->address;
    CheckImage(Info, Info);
    //CVImage = cvCreateImageHeader( cvSize(Info->xRes, Info->yRes), IPL_DEPTH_8U, 4);
    //CVImage->imageData = (char*)Info->imageStart;
    //CVImage->widthStep = Info->xRes*sizeof(IPL_DEPTH_8U);
    cv::Mat BGRAimg = cv::Mat(Info->yRes, Info->xRes, CV_8UC4, (char*)Info->imageStart, sizeof(CV_8UC4)*Info->xRes);
    //cv::Mat BGRAimg(CVImage);
    //cv::Mat BGRAimg = imread( "MyImg.png", cv::IMREAD_COLOR );
    cv::Mat GREYimg;
    cv::cvtColor(BGRAimg, GREYimg, CV_BGR2GRAY);
Here is the code where I create the Mat object from user-supplied data.
I tried creating an IplImage first (the commented version in the code) and using the Mat constructor with an IplImage argument, but ended up with the same problem.
I know I am doing something very wrong during the Mat construction, since manually loading the image from disk does not cause the issue.
After creating the Mat object, all its parameters are correct and the image is fine. Compared with the grey matrix created from it, it has refcount NULL, which I have read is perfectly fine since it is supposed to keep the user data intact.
Please help.
UPDATE to give more information
Thank you for the suggestions. I am obviously prone to creating such errors; I am new to C/C++.
Unfortunately, the access violation still persists.
Here is the complete wrapper function as it is. I tried to narrow down the problem: skipping the HOG.compute call, I no longer get memory corruption; skipping the memcpy acrobatics at the end, I still get the memory corruption.
int GetHOGFeatures(Arr1D_floatHdl FeatArrHdl, IMAQ_Image *img, HogParams *Params) //returns -1 on HOG window parameters mismatch
{
    ImageInfo *Info = NULL;
    Info = (ImageInfo*)img->address;
    CheckImage(Info, Info);
    cv::Mat BGRAimg = cv::Mat(Info->yRes, Info->xRes, CV_8UC4, (char*)Info->imageStart, sizeof(cv::Vec4b)*Info->xRes);
    cv::Mat GREYimg;
    cv::cvtColor(BGRAimg, GREYimg, CV_BGRA2GRAY);
    //set params into hog object
    cv::HOGDescriptor hog;
    hog.winSize = cv::Size(Params->winsize_width, Params->winsize_height);
    hog.blockSize = cv::Size(Params->blocksize_width, Params->blocksize_height);
    hog.blockStride = cv::Size(Params->blockstride_x, Params->blockstride_y);
    hog.cellSize = cv::Size(Params->cellsize_width, Params->cellsize_height);
    hog.nbins = Params->nBins;
    hog.derivAperture = Params->derivAperture;
    hog.winSigma = Params->win_sigma;
    hog.L2HysThreshold = Params->threshold_L2hys;
    hog.gammaCorrection = (Params->gammaCorrection != 0);
    MgErr error = mgNoErr;
    cv::vector<float> ders;
    cv::vector<cv::Point> locations;
    try
    {
        //winstride - step of window
        //padding - border padding
        //raises exception with incorrect params ... todo: replace try/catch with param checking
        hog.compute(GREYimg, ders, cv::Size(Params->winstride_x, Params->winstride_y), cv::Size(0,0), locations);
    }
    catch(...)
    {
        return -1;
    }
    //copy out the data into LabVIEW
    error = DSSetHandleSize(FeatArrHdl, sizeof(int32_t) + ders.size()*sizeof(float));
    memcpy((*FeatArrHdl)->Arr, ders.data(), sizeof(float)*ders.size());
    (*FeatArrHdl)->dimSize = ders.size();
    return error;
}
I am running this function with the following parameters:
Window size 32
Block size 16
Cell size 8
Block stride 8
Window stride 32
The rest of the parameters are defaults.
I decided to include the debugger's view of the Mat objects once constructed; I hope it can help.
This is the BGRA image constructed from user data. It is supposed to be 640x640 BGRA.
BGRAimg {flags=1124024344 dims=2 rows=640 ...} cv::Mat
flags 1124024344 int
dims 2 int
rows 640 int
cols 640 int
data 0x12250040 "e9%" unsigned char *
101 'e' unsigned char
refcount 0x00000000 int *
CXX0030: Error: expression cannot be evaluated
datastart 0x12250040 "e9%" unsigned char *
101 'e' unsigned char
dataend 0x123e0040 "" unsigned char *
0 unsigned char
datalimit 0x123e0040 "" unsigned char *
0 unsigned char
allocator 0x00000000 cv::MatAllocator *
__vfptr CXX0030: Error: expression cannot be evaluated
size {p=0x0012f44c } cv::Mat::MSize
p 0x0012f44c int *
640 int
step {p=0x0012f474 buf=0x0012f474 } cv::Mat::MStep
p 0x0012f474 unsigned int *
2560 unsigned int
buf 0x0012f474 unsigned int [2]
[0] 2560 unsigned int
[1] 4 unsigned int
And here is the grey image that enters the HOG descriptor calculator:
GREYimg {flags=1124024320 dims=2 rows=640 ...} cv::Mat
flags 1124024320 int
dims 2 int
rows 640 int
cols 640 int
refcount 0x0c867ff0 int *
1 int
dataend 0x0c867ff0 "" unsigned char *
1 '' unsigned char
datalimit 0x0c867ff0 "" unsigned char *
1 '' unsigned char
allocator 0x00000000 cv::MatAllocator *
__vfptr CXX0030: Error: expression cannot be evaluated
size {p=0x0012f40c } cv::Mat::MSize
p 0x0012f40c int *
640 int
step {p=0x0012f434 buf=0x0012f434 } cv::Mat::MStep
p 0x0012f434 unsigned int *
640 unsigned int
buf 0x0012f434 unsigned int [2]
[0] 640 unsigned int
[1] 1 unsigned int
I had to omit the data and datastart fields, because unlike for the BGRA image, MSVS actually shows some data in them.
UPDATE2
I changed Multi-threaded (/MT) to Multi-threaded DLL (/MD) in the project properties, and the issue is gone.
The problem persisted even if I was using code like this :
int dim = 32;
BYTE *mydata = NULL;
mydata = (BYTE*)malloc(sizeof(BYTE)*dim*dim);
Mat img;
img = Mat(Size(dim,dim), CV_8U, mydata, dim*sizeof(BYTE));
Might this indicate that my code was not the cause and this is some OpenCV vs. Windows runtime issue, or did I just hide the problem?
UPDATE3
After reading something about the Microsoft runtime, I decided to check how my OpenCV was built: it uses /MD, while I was building with /MT. I hope this was the cause.
This might not work like you expect:
sizeof(CV_8UC4)*Info->xRes
CV_8UC4 is an enum value, not a type, so you can't use sizeof() here.
If your data is continuous, you can probably just skip the stride parameter completely, or use:
sizeof(Vec4b)*Info->xRes
Another thing: your BGRAimg has 4 channels, right? So use
cv::cvtColor(BGRAimg, GREYimg, CV_BGRA2GRAY);
instead.
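Putting both fixes together, the construction would look something like this (a sketch, assuming Info describes a continuous 8-bit BGRA buffer, as in the question):

cv::Mat BGRAimg(Info->yRes, Info->xRes, CV_8UC4,
                (char*)Info->imageStart, sizeof(cv::Vec4b) * Info->xRes); // 4 one-byte channels per pixel
cv::Mat GREYimg;
cv::cvtColor(BGRAimg, GREYimg, CV_BGRA2GRAY); // 4-channel source needs BGRA, not BGR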

OpenCV Error: insufficient memory, in function call

I have a function that looks like this:
void foo() {
    Mat mat(50000, 200, CV_32FC1);
    /* some manipulation using mat */
}
Then, after several loops (in each loop, I call foo() once), it gives an error:
OpenCV Error: insufficient memory when allocating (about 1 GB of) memory.
In my understanding, the Mat is local and, once foo() returns, it is automatically deallocated, so I am wondering why it leaks.
And it leaks on some data, but not all of it.
Here is my actual code:
bool VidBOW::readFeatPoints(int sidx, int eidx, cv::Mat &keys, cv::Mat &descs, cv::Mat &codes, int &barrier) {
    // initialize buffers for keys and descriptors
    int num = 50000; /// a large number
    int nDims = 0;   /// feature dimensions
    if (featName == "STIP")
        nDims = 162;
    Mat descsBuff(num, nDims, CV_32FC1);
    Mat keysBuff(num, 3, CV_32FC1);
    Mat codesBuff(num, 3000, CV_64FC1);
    // move overlapping codes from a previous window to buffer
    int idxPre = -1;
    int numPre = keys.rows;
    int numMov = 0; /// number of overlapping points to move
    for (int i = 0; i < numPre; ++i) {
        if (keys.at<float>(i, 0) >= sidx) {
            idxPre = i;
            break;
        }
    }
    if (idxPre > 0) {
        numMov = numPre - idxPre;
        keys.rowRange(idxPre, numPre).copyTo(keysBuff.rowRange(0, numMov));
        codes.rowRange(idxPre, numPre).copyTo(codesBuff.rowRange(0, numMov));
    }
    // the starting row in the code matrix where new codes from the updated features are added
    barrier = numMov;
    // read keys and descriptors from feature file
    int count = 0; /// number of new points that are read into buffers
    if (featName == "STIP")
        count = readSTIPFeatPoints(numMov, eidx, keysBuff, descsBuff);
    // update keys, descriptors and codes matrix
    descsBuff.rowRange(0, count).copyTo(descs);
    keysBuff.rowRange(0, numMov+count).copyTo(keys);
    codesBuff.rowRange(0, numMov+count).copyTo(codes);
    // see if reaching the end of a feature file
    bool flag = false;
    if (feof(fpfeat))
        flag = true;
    return flag;
}
You don't post the code that calls your function, so I can't tell whether this is a true memory leak. The Mat objects that you allocate inside readFeatPoints() will be deallocated correctly, so there are no memory leaks that I can see.
You declare Mat codesBuff(num, 3000, CV_64FC1);. With num = 50000, this means you're trying to allocate 1.2 gigabytes of memory in one big block (50000 x 3000 x 8 bytes). You also copy some of this data to codes with the line:
codesBuff.rowRange(0, numMov+count).copyTo(codes);
If the value of numMov + count changes between iterations, this will cause reallocation of the data buffer in codes. If the value is large enough, you may also be eating up a significant amount of memory that persists across iterations of your loop. Both of these things can lead to heap fragmentation. If at any point there isn't a contiguous 1.2 GB chunk of memory available, an insufficient memory error occurs, which is what you have experienced.
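One way to reduce the fragmentation is to allocate the large buffers once and reuse them across calls instead of recreating them inside readFeatPoints() every iteration. A minimal sketch of the idea, assuming the buffers can become members of VidBOW (the member names are illustrative):

#include <opencv2/core/core.hpp>

class VidBOW {
    // persistent buffers, allocated on the first call and reused afterwards
    cv::Mat descsBuff, keysBuff, codesBuff;

public:
    bool readFeatPoints(int sidx, int eidx, cv::Mat &keys, cv::Mat &descs,
                        cv::Mat &codes, int &barrier)
    {
        const int num = 50000;
        const int nDims = 162;
        // Mat::create() is a no-op when the size and type already match,
        // so the 1.2 GB block is only allocated once
        descsBuff.create(num, nDims, CV_32FC1);
        keysBuff.create(num, 3, CV_32FC1);
        codesBuff.create(num, 3000, CV_64FC1);
        // ... rest of the function unchanged ...
        return false;
    }
};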

Create CImage from Byte array

I need to create a CImage from a byte array (actually, it's an array of unsigned char, but I can cast to whatever form is necessary). The byte array is in the form "RGBRGBRGB...". The new image needs to contain a copy of the image bytes, rather than using the memory of the byte array itself.
I have tried many different ways of achieving this, including going through various HBITMAP creation functions and trying to use BitBlt, and nothing so far has worked.
To test whether the function works, it should pass this test:
BYTE* imgBits;
int width;
int height;
int Bpp; // BYTES per pixel (e.g. 3)
getImage(&imgBits, &width, &height, &Bpp); // get the image bits
// This is the magic function I need!!!
CImage img = createCImage(imgBits, width, height, Bpp);
// Test the image
BYTE* data = img.GetBits(); // data should now have the same data as imgBits
All implementations of createCImage() so far have ended up with data pointing to an empty (zero filled) array.
CImage supports DIBs quite neatly and has a SetPixel() method so you could presumably do something like this (uncompiled, untested code ahead!):
CImage img;
img.Create(width, height, 24 /* bpp */, 0 /* No alpha channel */);
int nPixel = 0;
for (int row = 0; row < height; row++)
{
    for (int col = 0; col < width; col++)
    {
        BYTE r = imgBits[nPixel++];
        BYTE g = imgBits[nPixel++];
        BYTE b = imgBits[nPixel++];
        img.SetPixel(col, row, RGB(r, g, b)); // SetPixel takes (x, y): column first
    }
}
Maybe not the most efficient method but I should think it is the simplest approach.
Use memcpy to copy the data, then SetDIBits or SetDIBitsToDevice depending on what you need to do. Take care though, the scanlines of the raw image data are aligned on 4-byte boundaries (IIRC, it's been a few years since I did this) so the data you get back from GetDIBits will never be exactly the same as the original data (well it might, depending on the image size).
So most likely you will need to memcpy scanline by scanline.
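As a rough sketch of that row-by-row copy (the DIB pointer and the helper are assumptions; the 4-byte rounding is the point here):

#include <cstring>

// Copy tightly packed RGB rows into a DIB whose scanlines are DWORD-aligned.
// dibBits is assumed to point at the DIB's first scanline.
void copyToDib(unsigned char *dibBits, const unsigned char *imgBits,
               int width, int height, int Bpp)
{
    const int srcStride = width * Bpp;            // packed source rows
    const int dstStride = (width * Bpp + 3) & ~3; // rounded up to a DWORD boundary
    for (int y = 0; y < height; ++y)
        std::memcpy(dibBits + y * dstStride, imgBits + y * srcStride, srcStride);
}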
Thanks everyone, I managed to solve it in the end with your help. It mainly involved #tinman's and #Roel's suggestion to use SetDIBitsToDevice(), but it involved a bit of extra bit-twiddling and memory management, so I thought I'd share my end point here.
In the code below, I assume that width, height and Bpp (bytes per pixel) are set, and that data is a pointer to the array of RGB pixel values.
// Create the header info
BITMAPINFOHEADER bmInfohdr;
bmInfohdr.biSize = sizeof(BITMAPINFOHEADER);
bmInfohdr.biWidth = width;
bmInfohdr.biHeight = -height; // negative height = top-down DIB
bmInfohdr.biPlanes = 1;
bmInfohdr.biBitCount = Bpp*8;
bmInfohdr.biCompression = BI_RGB;
bmInfohdr.biSizeImage = width*height*Bpp;
bmInfohdr.biXPelsPerMeter = 0;
bmInfohdr.biYPelsPerMeter = 0;
bmInfohdr.biClrUsed = 0;
bmInfohdr.biClrImportant = 0;

BITMAPINFO bmInfo;
bmInfo.bmiHeader = bmInfohdr;
bmInfo.bmiColors[0].rgbBlue = 255;

// Allocate some memory and some pointers
unsigned char *p24Img = new unsigned char[width*height*3];
BYTE *pTemp, *ptr;
pTemp = (BYTE*)data;
ptr = p24Img;

// Convert image from RGB to BGR
for (DWORD index = 0; index < width*height; index++)
{
    unsigned char r = *(pTemp++);
    unsigned char g = *(pTemp++);
    unsigned char b = *(pTemp++);
    *(ptr++) = b;
    *(ptr++) = g;
    *(ptr++) = r;
}

// Create the CImage
CImage im;
im.Create(width, height, 24, NULL);
HDC dc = im.GetDC();
SetDIBitsToDevice(dc, 0, 0, width, height, 0, 0, 0, height, p24Img, &bmInfo, DIB_RGB_COLORS);
im.ReleaseDC();
delete[] p24Img;
Here is a simpler solution. You can use GetPixelAddress(...) instead of all the BITMAPINFOHEADER and SetDIBitsToDevice work. Another problem I solved was with 8-bit images, which need a color table defined.
CImage outImage;
outImage.Create(width, height, channelCount * 8);
int lineSize = width * channelCount;

if (channelCount == 1)
{
    // Define the color table
    RGBQUAD* tab = new RGBQUAD[256];
    for (int i = 0; i < 256; ++i)
    {
        tab[i].rgbRed = i;
        tab[i].rgbGreen = i;
        tab[i].rgbBlue = i;
        tab[i].rgbReserved = 0;
    }
    outImage.SetColorTable(0, 256, tab);
    delete[] tab;
}

// Copy pixel values
// Warning: does not convert from RGB to BGR
for (int i = 0; i < height; i++)
{
    void* dst = outImage.GetPixelAddress(0, i);
    const void* src = /* put the pointer to the i'th source row here */;
    memcpy(dst, src, lineSize);
}