I have compiled Tesseract 3.04.00 with the OpenCL option enabled. While trying to extract text from an image using GetUTF8Text(), I get a malloc error; a memory leak, I suppose.
I found a patch for a previously reported memory leak, but the version I compiled already includes that patch, so I am not sure why the leak occurs.
This is the output I am getting:
[DS] Profile read from file (tesseract_opencl_profile_devices.dat).
[DS] Device[1] 1:Intel(R) Core(TM) i5-4250U CPU @ 1.30GHz score is 14049349632.000000
[DS] Device[2] 1:HD Graphics 5000 score is 14049349632.000000
[DS] Device[3] 0:(null) score is 21474836480.000000
[DS] Selected Device[2]: "HD Graphics 5000" (OpenCL)
ACP(15114,0x7fff795bf300) malloc: *** mach_vm_map(size=1125865547108352) failed (error code=3)
*** error: can't allocate region
*** set a breakpoint in malloc_error_break to debug
Has anyone faced this problem before? How do I fix this?
I'm not familiar with Tesseract, but I suspect the patch you referred to was for a different issue.
Looking at the output, it appears you are using an Apple computer. Please take a look at the link below, which contains a 'how to' for installing and using Tesseract with OpenCL on Mac OS X:
https://ryanfb.github.io/etc/2015/03/18/experimenting_with_opencl_for_tesseract.html
Hope this is useful for fixing the issue.
Anyway, the error "can't allocate region" means that there is no memory left: a huge amount of memory was requested (size=1125865547108352, about 1.126 petabytes). To figure out what is really happening, you should debug the code with a debugger such as gdb (indeed, the error message says "set a breakpoint in malloc_error_break to debug"), or at least post a small program that reproduces the issue.
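For example, with lldb on OS X (gdb is analogous):

(lldb) breakpoint set --name malloc_error_break
(lldb) run

When the breakpoint hits, a backtrace (bt) shows which allocation inside Tesseract requested the absurd size.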
You ran out of memory. (error code=3) "can't allocate region" means malloc tried to allocate more memory than was available.
Maybe you can try restricting recognition to a sub-rectangle of the image: call SetRectangle(left, top, width, height) after SetImage. Each SetRectangle call clears the previous recognition results, so multiple rectangles can be recognized from the same image. E.g.
api->SetRectangle(30, 86, 590, 100);
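For context, a minimal sketch of how SetRectangle fits into the usual TessBaseAPI flow (file name and rectangle values are illustrative):

#include <tesseract/baseapi.h>
#include <leptonica/allheaders.h>
#include <cstdio>

int main()
{
    tesseract::TessBaseAPI api;
    if (api.Init(NULL, "eng"))            // load the English traineddata
        return 1;

    Pix* image = pixRead("input.png");    // hypothetical input image
    api.SetImage(image);

    // Restrict recognition to one sub-rectangle; call SetRectangle
    // again to recognize another region of the same image.
    api.SetRectangle(30, 86, 590, 100);

    char* text = api.GetUTF8Text();
    printf("%s", text);

    delete[] text;
    api.End();
    pixDestroy(&image);
    return 0;
}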
Without seeing your code, I'd guess either you are not destroying objects and releasing memory, or there is a bug in that version. To check whether it is the former, use a memory leak detection tool or library; to check whether it is the latter, debug it or simply try a different version.
I need to read an image in OpenCV, send it to DirectX, do some processing, and then send the output image back to OpenCV. I am completely new to this and was working with DirectX 11, so I decided to refer to this sample, which demonstrates OpenCV and DirectX interoperability. There are options for both GPU and CPU modes, but for now I plan to use only the CPU. The program builds without any errors, but returns the following runtime error every time:
Exception thrown at 0x0000000000000000 in Direct3D Win32 my_project_name.exe: 0xC0000005: Access violation executing location 0x0000000000000000. occurred
I looked for solutions everywhere but couldn't find any. Since this is a sample, I guess many people might have used this.
Here are the lines where I'm getting an exception.
if (cv::ocl::haveOpenCL())
{
    // this is the line which throws the exception
    m_oclCtx = cv::directx::ocl::initializeContextFromD3D11Device(m_pD3D11Dev);
}
m_oclDevName = cv::ocl::useOpenCL() ?
               cv::ocl::Context::getDefault().device(0).name() :
               "No OpenCL device";
When I hover the cursor over m_pD3D11Dev, IntelliSense displays this message:
m_pD3D11Dev | 0x000001f5a286c618 <No type information available in symbol file for d3d11.dll>
I am guessing there is some error in my setup or some other linker error, since this is sample code provided by OpenCV (which I assume is supposed to run out of the box). Any help or guidance would be appreciated.
I'm using DirectX 11 and OpenCV 3.4.4 to build (x64) this in Visual Studio 2019. I also tried building (x64) it in Visual Studio 2017, but the results were the same.
Thanks in advance.
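An access violation executing address 0x0000000000000000 generally means a call through a null pointer, so it is worth confirming that the D3D11 device was actually created before it is handed to OpenCV. A minimal sketch (m_pD3D11Dev is the name from the question; everything else is illustrative):

#include <d3d11.h>

ID3D11Device*        m_pD3D11Dev = nullptr;
ID3D11DeviceContext* m_pD3D11Ctx = nullptr;

// Create the device explicitly and fail loudly if creation did not
// succeed, instead of passing a possibly-null pointer into OpenCV.
HRESULT hr = D3D11CreateDevice(
    nullptr,                    // default adapter
    D3D_DRIVER_TYPE_HARDWARE,
    nullptr, 0,                 // no software rasterizer, no flags
    nullptr, 0,                 // default feature levels
    D3D11_SDK_VERSION,
    &m_pD3D11Dev, nullptr, &m_pD3D11Ctx);

if (FAILED(hr) || m_pD3D11Dev == nullptr)
{
    // Handle the failure here; calling initializeContextFromD3D11Device
    // with a bad device is what produces the access violation.
}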
Sorry, I'm new to Green Hills. I'm using MULTI 6.1.6 and my language of choice is C++.
I have a problem when I try, in the simulator, to instantiate an object of a class bigger than 1 MB in size using new.
Class_Big* big_obj;
big_obj = new Class_Big();
Class_Small* Small_obj;
Small_obj = new Class_Small();
If sizeof(Class_Big) > 1 MB, it simply never calls the class constructor, returns NULL, and goes on to the next instruction (Class_Small* Small_obj;), which creates the next object correctly. If I scope out some variables in Class_Big to make its size < 1 MB, the code works fine and the object is created.
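Since new evidently returns NULL here instead of throwing (a common configuration in embedded toolchains), a defensive check makes the failure explicit (a sketch):

Class_Big* big_obj = new Class_Big();
if (big_obj == NULL)
{
    // Allocation failed: sizeof(Class_Big) exceeds what the heap /
    // memory pool can satisfy. Handle the failure here instead of
    // dereferencing a NULL pointer later.
}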
I added both
MemoryPoolSize="0x200000"
HeapSize="0x200000"
to my xml file.
Another error I get at build time, if I use a library that has a big class:
intex: error: Not enough RAM for request.
intex: fatal: Integrate failed.
Error: build failed
Can you help with it?
Thanks
To specify memory sizes for the heap and memory pool in the MULTI GUI, go to the .int file (it can be found under the .gpj dropdown when it is expanded) and double-click it to edit it. Then right-click inside the purple box, choose "Edit", and go to the "Attributes" tab, where you can enlarge the memory pool size and heap size.
Alternatively, you can simply edit the .int file in a text editor; the steps above are only needed if you want to set these through the GUI.
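If you do edit the .int file by hand, the relevant entry looks roughly like this (a sketch from memory of GHS integrate files; keyword spelling and values are illustrative, so verify against the MULTI documentation):

AddressSpace
    Filename        my_app
    MemoryPoolSize  0x400000
    HeapSize        0x400000
    Task Initial
        StackLength 0x8000
    EndTask
EndAddressSpace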
Also from their manual:
"Check the .bsp file in use. The memory declared with the
MinimumAddress/MaximumAddress keywords must match your board's memory.
If it does not, modify these keywords as needed. If the memory
declared in the .bsp file does match the board, you must modify your
application to use less memory."
Additionally, check the default.ld: you can set the values for the RAM limits there. Look at __INTEGRITY_RamLimit and the other values there (see the sketch below). Hope this helps!
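For instance, a default.ld usually carries a CONSTANTS block along these lines (only __INTEGRITY_RamLimit is taken from the advice above; the other name and all values are illustrative):

CONSTANTS {
    __INTEGRITY_RamBase  = 0x00100000
    __INTEGRITY_RamLimit = 0x02000000
}

Raising __INTEGRITY_RamLimit (within what the board or simulator actually has) gives the image more RAM to work with.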
With INTEGRITY you are in total control of how much memory is used for each partition. It is a static configuration: everything (code, stack, heap, you name it) comes out of that. So if you have a bunch of code, automatics, etc. in the partition, a memory allocation may fail if you ask for too much. Try increasing the size.
For the first part of the problem: basically, I should have modified the "VirtualHeapSize" in the .ld component file.
For the second part, I am still trying to figure it out.
I'm writing pattern-matching code using OpenCV with CUDA on Mac OS X. After ~70 frames, it slows down a lot. Using mach_absolute_time() I've been able to track down the culprit (initially I thought it was a disk access issue):
gpu::matchTemplate(currentFrame,
correlationTargets[i]->correlationImage,
temporaryImage,
CV_TM_CCOEFF_NORMED);
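For reference, the per-call timing looks roughly like this (a fragment; mach_absolute_time() returns ticks, which the timebase scales to nanoseconds):

#include <mach/mach_time.h>
#include <cstdint>
#include <cstdio>

mach_timebase_info_data_t tb;
mach_timebase_info(&tb);            // tick-to-nanosecond conversion factors

uint64_t t0 = mach_absolute_time();
gpu::matchTemplate(currentFrame,
                   correlationTargets[i]->correlationImage,
                   temporaryImage,
                   CV_TM_CCOEFF_NORMED);
uint64_t t1 = mach_absolute_time();

printf("matchTemplate: %llu ns\n",
       (unsigned long long)((t1 - t0) * tb.numer / tb.denom));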
I believe this is caused by some memory issue inside matchTemplate. After a lot of searching, I found the MatchTemplateBuf structure, which is presumably there for memory reuse. Since the problem seems memory-related, I think using it may be the solution. However, the following code crashes:
gpu::MatchTemplateBuf mtBuff;
[...]
for(...) {
gpu::matchTemplate(correlationTargets[i]->croppedImage,
correlationTargets[i]->correlationImage,
correlationTargets[i]->maxAllocation,
CV_TM_CCOEFF_NORMED, mtBuff);
With error:
OpenCV Error: Gpu API call (Unknown error code [Code = 9999]) in convolve, file /Users/sermarc/Downloads/opencv-2.4-3.8/modules/gpu/src/imgproc.cpp, line 1431
libc++abi.dylib: terminating with uncaught exception of type cv::Exception: /Users/sermarc/Downloads/opencv-2.4-3.8/modules/gpu/src/imgproc.cpp:1431: error: (-217) Unknown error code [Code = 9999] in function convolve
I believe this is because the MatchTemplateBuf is not properly initialized. However, I cannot find any information or example that shows how to set it to a valid state.
The code works with:
gpu::matchTemplate(correlationTargets[i]->croppedImage,
correlationTargets[i]->correlationImage,
correlationTargets[i]->maxAllocation,
CV_TM_CCOEFF_NORMED);
Below is some simple OpenCV code to grab a frame from a video file and run the SURF feature detector and extractor on it. When I run this code on Linux and OS X it works fine; however, on Windows I get a heap corruption error on the two commented lines.
VideoCapture capture(vidFilename.c_str());
Mat frame;
capture >> frame;
SurfFeatureDetector *detector = new SurfFeatureDetector(minHessian);
vector<KeyPoint> frameKeypoints;
detector->detect(frame, frameKeypoints);
delete(detector); // Heap Corruption Detected
SurfDescriptorExtractor *extractor = new SurfDescriptorExtractor();
Mat frameDescriptors;
extractor->compute(frame, frameKeypoints, frameDescriptors);
delete(extractor); // Heap Corruption Detected
I have no clue what could cause this in the code. I am using VS 2010 to compile it; is there something in VS that could cause this to happen?
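Incidentally, one way to take the explicit delete (and any cross-module new/delete mismatch) out of the picture is to stack-allocate the detector and extractor. A sketch with the same 2.4-era API:

// Destruction now happens automatically at scope exit, in the same
// module that constructed the objects.
SurfFeatureDetector detector(minHessian);
vector<KeyPoint> frameKeypoints;
detector.detect(frame, frameKeypoints);

SurfDescriptorExtractor extractor;
Mat frameDescriptors;
extractor.compute(frame, frameKeypoints, frameDescriptors);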
As mentioned above, the fact that you do not get an exception related to heap corruption does not mean that it did not happen. The problem is likely in your code, not in VS or the compiler. My previous post on a similar question should be useful here as well:
https://stackoverflow.com/a/22074401/2724703
You should probably also try running a dynamic analysis tool (e.g. Valgrind) on your Linux build; there is a good chance it will find the same bug there as well.
These dynamic tools can point you to the root cause of such problems.
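For example, on the Linux build (binary name hypothetical):

valgrind --tool=memcheck --leak-check=full ./surf_test

Memcheck flags invalid writes and mismatched new/delete at the point they occur, which is usually enough to locate a corruption like this.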
I have a program that creates a file mapping. That call works just fine, m_hMap = CreateFileMapping(m_hFile,0,dwProtect,0,m_dwMapSize,NULL);, but the subsequent call to MapViewOfFile(m_hMap,dwViewAccess,0,0,0) fails with error code 8, which is ERROR_NOT_ENOUGH_MEMORY, or the error string "Not enough storage is available to process this command".
So I'm not sure I understand what MapViewOfFile does for me, or how to fix this situation.
some numbers...
m_dwMapSize = 453427200
dwProtect = PAGE_READWRITE;
dwViewAccess = FILE_MAP_ALL_ACCESS;
I think my page size is 65536
For a very large file, it is recommended to read it in small pieces and process each piece, and MapViewOfFile is the function used to map each piece into memory.
Look at http://msdn.microsoft.com/en-us/library/windows/desktop/aa366761(v=vs.85).aspx: MapViewOfFile takes an offset precisely so that you can map a very large file in pieces. Very large single requests tend to fail because of address-space fragmentation and related reasons.
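A sketch of mapping a large file in pieces (each view offset must be a multiple of the system allocation granularity, the 65536 you mention; names are illustrative):

#include <windows.h>

void ProcessChunk(const BYTE* data, DWORD size);   // hypothetical consumer

void MapInPieces(HANDLE hMap, ULONGLONG fileSize)
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);
    // A chunk size that is a multiple of the allocation granularity (~64 KB).
    const DWORD chunk = 16 * si.dwAllocationGranularity;   // ~1 MB pieces

    for (ULONGLONG offset = 0; offset < fileSize; offset += chunk)
    {
        DWORD bytes = (DWORD)min((ULONGLONG)chunk, fileSize - offset);
        BYTE* view = (BYTE*)MapViewOfFile(hMap, FILE_MAP_READ,
                                          (DWORD)(offset >> 32),        // offset, high part
                                          (DWORD)(offset & 0xFFFFFFFF), // offset, low part
                                          bytes);
        if (view == NULL)
            break;                       // GetLastError() tells you why
        ProcessChunk(view, bytes);
        UnmapViewOfFile(view);
    }
}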
If you are running a 32-bit process on a 64-bit system, the system will give it a full 4 GB of address space when the large-address-aware bit is set.
Go to Configuration Properties -> Linker -> System and set Enable Large Addresses to Yes (/LARGEADDRESSAWARE).