How do I initialize a MatchTemplateBuf in OpenCV? - c++

I'm writing pattern matching code using OpenCV with CUDA on Mac OS X. After ~70 frames, it slows down a lot. Using mach_absolute_time() I was able to track down the culprit (initially I thought it was a disk-access issue):
gpu::matchTemplate(currentFrame,
                   correlationTargets[i]->correlationImage,
                   temporaryImage,
                   CV_TM_CCOEFF_NORMED);
I believe this is caused by some memory issue inside matchTemplate. After a lot of searching, I found the MatchTemplateBuf structure, which is presumably meant for memory reuse. Since the problem seems memory related, I think using it may be the solution. However, the following code crashes:
gpu::MatchTemplateBuf mtBuff;
[...]
for(...) {
    gpu::matchTemplate(correlationTargets[i]->croppedImage,
                       correlationTargets[i]->correlationImage,
                       correlationTargets[i]->maxAllocation,
                       CV_TM_CCOEFF_NORMED, mtBuff);
}
With error:
OpenCV Error: Gpu API call (Unknown error code [Code = 9999]) in convolve, file /Users/sermarc/Downloads/opencv-2.4-3.8/modules/gpu/src/imgproc.cpp, line 1431
libc++abi.dylib: terminating with uncaught exception of type cv::Exception: /Users/sermarc/Downloads/opencv-2.4-3.8/modules/gpu/src/imgproc.cpp:1431: error: (-217) Unknown error code [Code = 9999] in function convolve
I believe this is because the MatchTemplateBuf is not properly initialized. However, I cannot find any information or example that shows it being set to a valid state.
The code works with:
gpu::matchTemplate(correlationTargets[i]->croppedImage,
                   correlationTargets[i]->correlationImage,
                   correlationTargets[i]->maxAllocation,
                   CV_TM_CCOEFF_NORMED);
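For what it's worth, a default-constructed gpu::MatchTemplateBuf needs no explicit initialization; matchTemplate allocates the buffer's internal scratch images on first use. One plausible cause of the crash is sharing a single buffer across calls whose image/template sizes differ, since the buffer caches intermediates sized for the previous call. Below is a minimal sketch under that assumption, keeping one buffer per correlation target; it reuses the names from the question, and buffers (plus the assumption that correlationTargets is a std::vector) is introduced here purely for illustration:

#include <vector>
#include <opencv2/gpu/gpu.hpp>
#include <opencv2/imgproc/imgproc.hpp>

// Assumption: one MatchTemplateBuf per target, so the scratch images cached
// inside each buffer always match that target's image/template sizes.
std::vector<cv::gpu::MatchTemplateBuf> buffers(correlationTargets.size());

for (size_t i = 0; i < correlationTargets.size(); ++i) {
    cv::gpu::matchTemplate(correlationTargets[i]->croppedImage,
                           correlationTargets[i]->correlationImage,
                           correlationTargets[i]->maxAllocation,
                           CV_TM_CCOEFF_NORMED,
                           buffers[i]);  // reused on every frame instead of reallocated
}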

Related

OpenCV DirectX 11 Interoperability

I need to read an image in OpenCV, send it to DirectX, do some processing, and then send the output image back to OpenCV. I am completely new to this and was working with DirectX 11, so I decided to refer to this sample, which demonstrates OpenCV and DirectX interoperability. There are options for both GPU and CPU modes, but for now I plan to use only the CPU. The program builds without any errors, but returns the following runtime error every time:
Exception thrown at 0x0000000000000000 in Direct3D Win32 my_project_name.exe: 0xC0000005: Access violation executing location 0x0000000000000000.
I looked for solutions everywhere but couldn't find any. Since this is a sample, I guess many people might have used this.
Here are the lines where I'm getting an exception.
if (cv::ocl::haveOpenCL())
{
    m_oclCtx = cv::directx::ocl::initializeContextFromD3D11Device(m_pD3D11Dev); // this is the line which throws the exception
}
m_oclDevName = cv::ocl::useOpenCL() ?
               cv::ocl::Context::getDefault().device(0).name() :
               "No OpenCL device";
On hovering the cursor above m_pD3D11Dev, this message is displayed by the intellisense:
m_pD3D11Dev | 0x000001f5a286c618 <No type information available in symbol file for d3d11.dll>
I am guessing that there is some error in my setup or some other linker error, since this is sample code provided by OpenCV (which I assume is obviously going to run). Any help or guidance would be appreciated.
I'm using DirectX 11 and OpenCV 3.4.4 to build (x64) this in Visual Studio 2019. I also tried building (x64) it in Visual Studio 2017, but the results were the same.
Thanks in advance.
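Not an answer, but one thing that may be worth ruling out: an access violation at address 0x0000000000000000 usually means a call through a null pointer. A minimal guard before the interop call, reusing the names from the snippet above (a diagnostic sketch, not a confirmed fix):

#include <stdexcept>
#include <opencv2/core/ocl.hpp>
#include <opencv2/core/directx.hpp>

// Assumption: m_pD3D11Dev should already have been filled in by
// D3D11CreateDevice(). If it is still null, a crash at address 0 follows.
if (m_pD3D11Dev == nullptr)
{
    throw std::runtime_error("D3D11 device was not created before OpenCL interop init");
}

if (cv::ocl::haveOpenCL())
{
    m_oclCtx = cv::directx::ocl::initializeContextFromD3D11Device(m_pD3D11Dev);
}

If the pointer is valid, it may also be worth confirming that your OpenCV build was compiled with OpenCL and DirectX interop support enabled.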

malloc error while using Tesseract with OpenCL option enabled

I have compiled Tesseract 3.04.00 with the OpenCL option enabled. While trying to extract text from an image using GetUTF8Text(), there is a malloc error, a memory leak I suppose.
I found a patch for a previously reported memory leak; however, the version I compiled already includes that patch. I am not sure why the memory leak is occurring.
This is the output I am getting:
[DS] Profile read from file (tesseract_opencl_profile_devices.dat).
[DS] Device[1] 1:Intel(R) Core(TM) i5-4250U CPU @ 1.30GHz score is 14049349632.000000
[DS] Device[2] 1:HD Graphics 5000 score is 14049349632.000000
[DS] Device[3] 0:(null) score is 21474836480.000000
[DS] Selected Device[2]: "HD Graphics 5000" (OpenCL)
ACP(15114,0x7fff795bf300) malloc: *** mach_vm_map(size=1125865547108352) failed (error code=3)
*** error: can't allocate region
*** set a breakpoint in malloc_error_break to debug
Has anyone faced this problem before? How do I fix this?
I'm not familiar with Tesseract, but I suspect the patch you referred to was for a different issue.
Looking at the output, it looks like you are using an Apple computer. Please take a look at the link below, which contains a 'how to' for installing and using Tesseract on Mac OS X:
https://ryanfb.github.io/etc/2015/03/18/experimenting_with_opencl_for_tesseract.html
Hope this is useful to fix the issue.
Anyway, the error "can't allocate region" means that there is no memory left: a huge amount of memory was requested (size=1125865547108352, about 1.126 petabytes). To figure out what is really happening, you should debug the code with a debugger such as gdb (indeed, the error message says "set a breakpoint in malloc_error_break to debug"), or at least post a small program that can be used to reproduce the issue.
You ran out of memory. (error code=3) "can't allocate region" means malloc tried to allocate more memory than was available.
Maybe you can try to restrict recognition to a sub-rectangle of the image: call SetRectangle(left, top, width, height) after SetImage. Each SetRectangle call clears the recognition results, so multiple rectangles can be recognized within the same image. E.g.:
api->SetRectangle(30, 86, 590, 100);
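A slightly fuller sketch of that approach (the image path is a placeholder and error handling is kept minimal):

#include <tesseract/baseapi.h>
#include <leptonica/allheaders.h>

int main()
{
    tesseract::TessBaseAPI api;
    if (api.Init(NULL, "eng"))                   // NULL = default tessdata path
        return 1;

    Pix* image = pixRead("/path/to/image.png");  // placeholder path
    api.SetImage(image);

    // Restrict recognition to a sub-rectangle; each SetRectangle call
    // clears the previous results, so several regions can be recognized
    // from the same SetImage call.
    api.SetRectangle(30, 86, 590, 100);
    char* text = api.GetUTF8Text();
    // ... use text ...
    delete[] text;

    api.End();
    pixDestroy(&image);
    return 0;
}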
Without seeing your code, I'd guess either you are not destroying objects and releasing memory, or there is a bug in that version. To check whether it's the former, you can use a memory-leak detection tool/library; to check whether it's the latter, you can debug it or just try using a different version.

Error message: "Assertion 't = find_next_time_event( m )' failed at pulse/mainloop.c"? (C++)

OK, so I'm using Code::Blocks for programming in C++. I have a somewhat "random" error message (it doesn't happen every time, and I'm not able to predict when it happens) which makes the program crash. It says:
Assertion 't = find_next_time_event( m )' failed at pulse/mainloop.c:721,
function calc_next_timeout(). Aborting.
Aborted (core dumped).
Process returned 134 (0x86)
"Core dumped" happens when I shouldn't have deleted some pointer, am I right?
The thing I don't understand is what comes before "Aborted (core dumped)". Can it guide me to what kind of error I made?
Or is it a problem with Code::Blocks? (I doubt it, but that would be great :p)
*I'm not posting code here because I just want information about what could theoretically create this kind of message. Then I'll search, and if I have problems finding the error(s), I'll post some code here ;)*
It is telling you that an assertion failed on line 721 of the file pulse/mainloop.c, which is part of the code your program runs.
An assertion is typically placed to check invariants or preconditions/postconditions. Taking a precondition as an example, this is saying "this expression has to be true in order for the code below to work correctly".
By inspecting the condition (at line 721 of mainloop.c) and understanding why it was not true in your case, you should be able to find the error in your code that led to the failed assertion.
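As a generic illustration (not the PulseAudio code itself), an assertion guarding a precondition looks like this:

#include <cassert>

// Precondition: the caller must pass a non-null, non-empty array.
// If the expression is false, assert() prints the expression, file and
// line, then calls abort(), which raises SIGABRT ("Aborted (core dumped)").
double average(const double* values, int count)
{
    assert(values != 0 && count > 0);
    double sum = 0.0;
    for (int i = 0; i < count; ++i)
        sum += values[i];
    return sum / count;
}

This also explains the exit status in your output: 134 is 128 + 6, and 6 is the signal number of SIGABRT.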
This isn't really a solution but this issue actually has to do with PulseAudio. I assume OP is using Linux, possibly Ubuntu, which is when this error occurs. I sometimes get the same thing when using a program written in Python. There is a noted bug about this issue in the PulseAudio Launchpad bugs.

error retrieving background image from BackgroundSubtractorMOG2

I'm trying to get the background image from BackgroundSubtractorMOG2:
bg->getBackgroundImage(back);
but I get a Thread 1 SIGABRT (which, as a C++ n00b, puzzles me)
and this error:
OpenCV Error: Assertion failed (nchannels == 3) in getBackgroundImage, file /Users/hm/Downloads/OpenCV-2.4.4/modules/video/src/bgfg_gaussmix2.cpp, line 579
libc++abi.dylib: terminate called throwing an exception
(lldb)
I'm not sure what the problem is; I suspect it's something to do with the nmixtures parameter, but I've left that as the default (3). Any hints?
It looks like you need to use 3-channel images rather than grayscale. Make sure the image type you are using is CV_8UC3, or, if you are reading from a file, use cv::imread("path/to/file") with no additional arguments (the default flag loads the image as 3-channel BGR).
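A minimal sketch of the 3-channel pipeline with the OpenCV 2.4-style API (the video path is a placeholder):

#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap("/path/to/video.avi");   // placeholder path
    cv::BackgroundSubtractorMOG2 bg;

    cv::Mat frame, fgmask, back;
    while (cap.read(frame))        // frames decode as 3-channel BGR (CV_8UC3)
    {
        bg(frame, fgmask);         // update the background model
    }
    bg.getBackgroundImage(back);   // OK: the model was fed 3-channel frames
    return 0;
}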

OpenCV Error: Bad argument (Array should be CvMat or IplImage) in cvGetSize

I have successfully written a video processing program. I used Ubuntu and NetBeans for programming. When I run this program in NetBeans, it runs perfectly and gives the expected output.
I built executables of this program in both debug and release mode and tried to run them from the command line. Now I get the following error, even though NetBeans doesn't complain about it. Could someone point out what the problem might be?
OpenCV Error: Bad argument (Array should be CvMat or IplImage) in cvGetSize, file /home/<user>/trunk/opencv/modules/core/src/array.cpp, line 1238
terminate called after throwing an instance of 'cv::Exception'
what(): /home/<user>/trunk/opencv/modules/core/src/array.cpp:1238: error: (-5) Array should be CvMat or IplImage in function cvGetSize
Thank you in advance.
Can you check whether the input argument to cvGetSize is:
- a NULL pointer? (What is the result of querying/retrieving the frame?)
- a CvSeq?
- a 1- or 3-dimensional array?
Usually it is the first; see the sketch below.
That's the way OpenCV talks to you: it's more often a runtime exception than a compile-time error.
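For the first (and most common) case, a minimal check with the old C API (the video path is a placeholder):

#include <cstdio>
#include <opencv2/highgui/highgui.hpp>

int main()
{
    CvCapture* capture = cvCaptureFromFile("/path/to/video.avi");  // placeholder
    if (!capture)
    {
        std::fprintf(stderr, "Could not open the capture\n");
        return 1;
    }

    IplImage* frame = cvQueryFrame(capture);
    if (!frame)  // a NULL frame here would make cvGetSize(frame) throw
    {
        std::fprintf(stderr, "Could not retrieve a frame\n");
        cvReleaseCapture(&capture);
        return 1;
    }

    CvSize size = cvGetSize(frame);  // safe: frame is a valid IplImage
    std::printf("%d x %d\n", size.width, size.height);
    cvReleaseCapture(&capture);
    return 0;
}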