Tesseract init failure - C++

I have encountered an issue while using the tesseract library. I successfully compiled the leptonica and tesseract libs with VS2017. Now I'm using these libraries in an MFC project, where they compile without any error. Here is the code:
tesseract::TessBaseAPI api;
if (0 != api.Init(NULL, _T("eng"), tesseract::OEM_DEFAULT))
{
    m_sState.Format(_T("tesseract initialize error"));
    return FALSE;
}
Nothing complicated, nothing wrong ... but I ran into 2 problems:
1) Whether this code is executed or not, I get a massive memory leak:
Detected memory leaks!
Dumping objects ->
{65734} normal block at 0x014EEB88, 24 bytes long.
Data: < FXDebug > 10 00 00 00 08 00 00 00 46 58 44 65 62 75 67 00
{65733} normal block at 0x014EEB40, 24 bytes long.
Data: < FXDebug > 10 00 00 00 08 00 00 00 46 58 44 65 62 75 67 00
{65732} normal block at 0x03880908, 8 bytes long.
Data: < > 10 BE 96 0F 00 00 00 00
{65731} normal block at 0x014EBDA8, 32 bytes long.
Data: < N N N > A8 BD 4E 01 A8 BD 4E 01 A8 BD 4E 01 01 01 CD CD
{65730} normal block at 0x03880A20, 8 bytes long.
Data: < > 04 BE 96 0F 00 00 00 00
{65729} normal block at 0x014EE990, 24 bytes long.
2) Every time this code is executed, the app goes down the "tesseract initialize error" route. I don't understand why ...
I'm running this project in VS2017 on Win10 64-bit; all libraries and my project are compiled as Debug ... if this matters ... can you help me use tesseract to read simple images?
Last edit:
When I include this code in a console app:
#include <iostream>
#include <windows.h>   // for GetLastError()
#include <leptonica/allheaders.h>
#include <tesseract/baseapi.h>

int main()
{
    std::cout << "Hello World!\n";
    tesseract::TessBaseAPI api;
    if (0 != api.Init(NULL, NULL))
    {
        std::cout << "tesseract initialize error\n";
        std::cout << "Last error: " << GetLastError() << std::endl;
    }
}
I get the following error messages:
Hello World!
Error in pixReadMemTiff: function not present
Error in pixReadMem: tiff: no pix returned
Error in pixaGenerateFontFromString: pix not made
Error in bmfCreate: font pixa not made
Error opening data file C:\Program Files (x86)\Tesseract-OCR\eng.traineddata
Please make sure the TESSDATA_PREFIX environment variable is set to your "tessdata" directory.
Failed loading language 'eng'
Tesseract couldn't load any languages!
tesseract initialize error
Last error:3
but I don't have any "Tesseract-OCR" folder in "Program Files (x86)" ...
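Note: the error output above suggests tesseract simply can't find eng.traineddata. Besides setting the TESSDATA_PREFIX environment variable, TessBaseAPI::Init also accepts an explicit datapath as its first argument. A minimal sketch, assuming the traineddata files live in C:\tessdata (a placeholder; point it at whatever directory actually holds eng.traineddata, keeping in mind that depending on the tesseract version Init expects either the tessdata folder itself or its parent):

#include <tesseract/baseapi.h>

tesseract::TessBaseAPI api;
// Point tesseract directly at the folder containing eng.traineddata
// instead of relying on the TESSDATA_PREFIX environment variable.
if (0 != api.Init("C:\\tessdata", "eng", tesseract::OEM_DEFAULT))
{
    // handle the initialization failure
}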

Related

gRPC memory leaks

My Visual Studio project, which uses the gRPC library, has memory leaks. After some R&D I made a little project to reproduce the problem and found that I don't even need to call any gRPC object in my code.
My steps:
1) Get helloworld.proto from examples
2) Generate C++ files
3) Create a C++ project with the following code:
#include "helloworld.grpc.pb.h"
void f(){
helloworld::HelloRequest request;
}
int main(){
_CrtSetDbgFlag(_CRTDBG_ALLOC_MEM_DF | _CRTDBG_LEAK_CHECK_DF);
return 0;
}
Part of the output (the full dump has 240 lines):
Detected memory leaks!
Dumping objects ->
{1450} normal block at 0x00FD77A0, 16 bytes long.
Data: <`{ t C | > 60 7B FD 00 20 74 FD 00 84 43 CA 00 88 7C CA 00
{1449} normal block at 0x00FECA30, 48 bytes long.
Data: <google.protobuf.> 67 6F 6F 67 6C 65 2E 70 72 6F 74 6F 62 75 66 2E
{1448} normal block at 0x00FEA048, 8 bytes long.
Data: < > 20 C6 FE 00 00 00 00 00
{1447} normal block at 0x00FEC610, 52 bytes long.
Data: < v p" v > B8 76 FC 00 70 22 FE 00 B8 76 FC 00 00 00 CD CD
{1441} normal block at 0x00FEA610, 32 bytes long.
Data: <google.protobuf.> 67 6F 6F 67 6C 65 2E 70 72 6F 74 6F 62 75 66 2E
{1440} normal block at 0x00FE9B78, 8 bytes long.
If I add a google::protobuf::ShutdownProtobufLibrary(); call before return 0;, I get much less output. Only this:
Detected memory leaks!
Dumping objects ->
{160} normal block at 0x00FCD858, 4 bytes long.
Data: < > 18 D6 B9 00
{159} normal block at 0x00FCD618, 4 bytes long.
Data: < > > C8 3E B9 00
{158} normal block at 0x00FCD678, 4 bytes long.
Data: < ? > D0 3F B9 00
Object dump complete.
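For reference, here is the main with that call added (just the change described above, nothing else):

int main(){
    _CrtSetDbgFlag(_CRTDBG_ALLOC_MEM_DF | _CRTDBG_LEAK_CHECK_DF);
    // Release protobuf's global metadata (descriptor pool etc.) before
    // the CRT leak check runs at process exit.
    google::protobuf::ShutdownProtobufLibrary();
    return 0;
}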
But if I include some additional generated sources, with many big services and messages, the memory dump gets much bigger.
So, since I don't use any gRPC objects directly, the only thing I can imagine is that some static objects are still alive when the VS memory dumper starts working.
Is there a way to fix this, or a suggestion for what I can do about it?
UPD:
I did some additional work around this problem and opened a new issue on the grpc repository bug tracker: https://github.com/grpc/grpc/issues/22506
The problem description in that issue contains screenshots with the leaked allocations' call stacks and gRPC debug traces.
UPD2:
I found all of them (version 1.23.0). I left a detailed comment there: https://github.com/grpc/grpc/issues/22506#issuecomment-618406755

Sudden Memory Leaks Issue

Environment:
Windows 10 Home 10.0.16299
MS Visual Studio Community 2017 Version 15.6.6
Compile Settings: 64 bit - Multi-Byte Char Set
For a few days now, I've been facing a very annoying memory leak issue. I've tried a lot, but the problem still remains.
Suddenly, I started getting the following messages on application exit:
{7196} normal block at 0x000001FD42D0EA90, 66 bytes long.
Data: <` f ) ) > 60 C6 CC 66 F7 7F 00 00 29 00 00 00 29 00 00 00
f:\dd\vctools\vc7libs\ship\atlmfc\src\mfc\oleinit.cpp(84) : {749} client block at 0x000001FD40F735F0, subtype c0, 104 bytes long.
f:\dd\vctools\vc7libs\ship\atlmfc\src\mfc\dumpcont.cpp(23) : atlTraceGeneral - a CCmdTarget object at $000001FD40F735F0, 104 bytes long
f:\dd\vctools\vc7libs\ship\atlmfc\src\mfc\plex.cpp(29) : {747} normal block at 0x000001FD40F78750, 248 bytes long.
Data: < > 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
f:\dd\vctools\vc7libs\ship\atlmfc\src\mfc\occmgr.cpp(794) : {746} normal block at 0x000001FD40F5F1D0, 8 bytes long.
Data: < f > F0 BB A3 66 F7 7F 00 00
I also get the following messages on application exit (12 of them, in fact):
{348} normal block at 0x00000236C88FE4A0, 69 bytes long.
Data: <` > , , > 60 C6 3E BE F6 7F 00 00 2C 00 00 00 2C 00 00 00
So, when I add the following line ...
_CrtSetBreakAlloc(348);
the debugger breaks with the message "The xxxx.exe has triggered a breakpoint."
It is always at the same place: the line
'pData = (CStringData*)malloc( nTotalSize );' in CAfxStringMgr::Allocate:
CStringData* CAfxStringMgr::Allocate( int nChars, int nCharSize ) throw()
{
    size_t nTotalSize;
    CStringData* pData;
    size_t nDataBytes;

    ASSERT(nCharSize > 0);

    if(nChars < 0)
    {
        ASSERT(FALSE);
        return NULL;
    }

    nDataBytes = (nChars+1)*nCharSize;
    nTotalSize = sizeof( CStringData )+nDataBytes;

    pData = (CStringData*)malloc( nTotalSize );
    if (pData == NULL)
        return NULL;

    pData->pStringMgr = this;
    pData->nRefs = 1;
    pData->nAllocLength = nChars;
    pData->nDataLength = 0;

    return pData;
}
The problem arose with functions that pass a CString as a reference parameter, like this one:
int CFoo::GetData(CString strMainData, CString& strData, int nPos, int nLen)
{
    strData = strMainData.Mid(nPos + 2, nLen);
    return ( nPos + 2 + nLen);
}
Any clue why I get this? I've been working on this app for several weeks and never experienced this issue before. I modified the function to use a char* parameter and then copy into the CString in the calling function, but the problem remains. This behaviour appeared two weeks ago (but I didn't notice it right away). Could it be a compiler setting?
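Side note on the technique used above: _CrtSetBreakAlloc is normally called as early as possible, e.g. at the top of InitInstance, so that the CRT breaks into the debugger with a useful call stack the moment allocation {348} is made. A minimal sketch, with CMyApp as a placeholder for the application class:

#include <crtdbg.h>

BOOL CMyApp::InitInstance()
{
    // Break into the debugger when allocation request #348 is made;
    // the call stack then shows exactly who allocated the block.
    _CrtSetBreakAlloc(348);
    ...
}

Keep in mind that the allocation number is only stable if the program runs deterministically; otherwise it changes between runs.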

Memory leak debugging: Ignore false positives (Visual Studio)

I am able to split the Visual Studio memory leak report into false positives and real memory leaks. In my main function, I initialize memory debugging and generate a real memory leak at the very beginning of my application (I never delete pcDynamicHeapStart):
int main()
{
    _CrtSetDbgFlag( _CRTDBG_ALLOC_MEM_DF | _CRTDBG_LEAK_CHECK_DF );

    char* pcDynamicHeapStart = new char[ 17u ];
    strcpy_s( pcDynamicHeapStart, 17u, "DynamicHeapStart" );
    ...
After my application has finished, the report in the output window contains:
Detected memory leaks!
Dumping objects ->
{15554} normal block at 0x00000000009FCCD0, 80 bytes long.
Data: < > DD DD DD DD DD DD DD DD DD DD DD DD DD DD DD DD
{14006} normal block at 0x00000000009CB360, 17 bytes long.
Data: <DynamicHeapStart> 44 79 6E 61 6D 69 63 48 65 61 70 53 74 61 72 74
{13998} normal block at 0x00000000009BF4B0, 32 bytes long.
Data: < ^ > E0 5E 9B 00 00 00 00 00 F0 7F 9C 00 00 00 00 00
{13997} normal block at 0x00000000009CA4B0, 8 bytes long.
Data: < > 14 00 00 00 00 00 00 00
{13982} normal block at 0x00000000009A12C0, 16 bytes long.
Data: < # > D0 DD D6 40 01 00 00 00 90 08 9C 00 00 00 00 00
...
Object dump complete.
Now look at the line "Data: <DynamicHeapStart> 44 79 6E 61 6D 69 63 48 65 61 70 53 74 61 72 74".
All reported leaks below it are false positives; all above it are real leaks.
A false positive doesn't mean there is no leak (it could be a statically linked library that allocates heap memory at startup and never frees it), but you cannot eliminate such a leak, and that's no problem at all.
Since I came up with this approach, I've never had leaking applications any more.
I wonder if I could hide the false positives (the leaks from statically allocated objects). My report contains hundreds of reported items, and I don't want to scroll up every time to check that everything was OK.
Maybe a filter for the output window would help?
Maybe there is an option to ignore the first n leaks, or to start reporting only after a defined ID? If so, the number or ID could be adjusted every now and then as a first approach.
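One approach that matches the "start reporting after a defined point" idea is the CRT checkpoint API: take a _CrtMemState snapshot right after the known startup allocations and dump only what was allocated afterwards. A minimal sketch (it replaces the automatic at-exit dump with a manual one, so the startup allocations never show up in the report):

#include <crtdbg.h>

int main()
{
    _CrtSetDbgFlag( _CRTDBG_ALLOC_MEM_DF ); // track allocations, but dump manually

    // Everything allocated before this checkpoint (e.g. by statically
    // linked libraries) is excluded from the report below.
    _CrtMemState startupState = { 0 };
    _CrtMemCheckpoint( &startupState );

    // ... application code ...

    // Report only allocations made since the checkpoint.
    _CrtMemDumpAllObjectsSince( &startupState );
    return 0;
}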

Memory leak while trying to replace character in string

I'm generating a big char buffer, to be passed to a thread later, with strcpy and strcat. It was all going OK until I needed to substitute all occurrences of the space with a comma in one of the strings. I searched for a solution to this here.
The problem is, now I have a memory leak, and the program exits with this message:
Dumping objects ->
{473} normal block at 0x0091E0C0, 32 bytes long.
Data: <AMLUH UL619 BKD > 41 4D 4C 55 48 20 55 4C 36 31 39 20 42 4B 44 20
{472} normal block at 0x049CCD20, 8 bytes long.
Data: < > BC ED 18 00 F0 EC 18 00
{416} normal block at 0x082B5158, 1000 bytes long.
Data: <Number of Aircra> 4E 75 6D 62 65 72 20 6F 66 20 41 69 72 63 72 61
{415} normal block at 0x04A0E200, 20 bytes long.
Data: < > 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
{185} normal block at 0x049DA998, 64 bytes long.
Data: < O X8 8 > DC 4F BB 58 38 C5 9A 06 38 D3 88 00 00 00 00 00
PythonPlugin.cpp(76) : {172} normal block at 0x0088D338, 72 bytes long.
Data: < a X F <) > DC 61 BB 58 18 BB 46 06 3C 29 8A 06 CD CD CD CD
Object dump complete.
Here's the code of the problem, so you can tell me what I'm doing wrong:
char* loop_planes(ac){
    char *char1 = new char[1000];
    for(...){
        strcpy(char1, "Number of Aircrafts\nHour of simulation\n\n");
        string tmp2 = fp.GetRoute();
        tmp2.replace(tmp2.begin(), tmp2.end(), " ", ","); // PROBLEM IS IN THIS LINE
        const char *tmp3 = tmp2.c_str();
        strcat(char1, tmp3);
    }
    return char1;
}
The fp.GetRoute() is a string like this: AMLUH UL619 BKD UM748 RUTOL.
Also, now that I'm talking about memory allocation: I don't want any future problems with memory leaks, so when should I delete char1, knowing that the thread is going to call this function?
When you call std::string::replace, the best match is a function template whose third and fourth parameters are input iterators. So the string literals you are passing are interpreted as the start and end of a range, which they are not. This leads to undefined behaviour.
You can fix this easily by using the algorithm std::replace instead:
std::replace(tmp2.begin(),tmp2.end(),' ',',');
Note that here the third and fourth parameters are single chars.
The answer from @juanchopanza correctly identifies and fixes the original problem, but since you've asked about memory leaks in general, I'd like to additionally suggest that you replace your function with something that doesn't use new or delete or strcpy or strcat.
std::string loop_planes() {
    std::string res("Number of Aircrafts\nHour of simulation\n\n");
    for (...) {
        std::string route = fp.GetRoute();
        std::replace(route.begin(), route.end(), ' ', ',');
        res += route;
    }
    return res;
}
This doesn't require any explicit memory allocation or deletion and does not leak memory. I also took the liberty of changing the return type from char * to std::string to eliminate messy conversions.

Memory leak when using Google Test on Windows

When I run the following code:
#include "gmock/gmock.h"
#include "gtest/gtest.h"
#define _CRTDBG_MAP_ALLOC
#include <crtdbg.h>
int main(int argc, char **argv)
{
::testing::InitGoogleTest(&argc, argv);
_CrtDumpMemoryLeaks();
return 0;
}
I get the following output:
Detected memory leaks!
Dumping objects ->
{652} normal block at 0x00074CE0, 4 bytes long.
Data: < L > 98 4C 07 00
{651} normal block at 0x00074C98, 12 bytes long.
Data: <, > 2C 03 1B 01 00 00 00 00 00 00 00 00
{650} normal block at 0x00074C50, 8 bytes long.
Data: <hI > 68 49 07 00 00 00 00 00
{649} normal block at 0x00074C10, 4 bytes long.
Data: <t > 74 03 1B 01
{648} normal block at 0x00074BC8, 8 bytes long.
Data: <xK > 78 4B 07 00 00 00 00 00
{647} normal block at 0x00074B70, 28 bytes long.
Data: < K L > BC 01 1B 01 01 CD CD CD C8 4B 07 00 E0 4C 07 00
{646} normal block at 0x00074B28, 8 bytes long.
Data: < I > 18 49 07 00 00 00 00 00
{645} normal block at 0x00074AE0, 8 bytes long.
Data: < I > 04 49 07 00 00 00 00 00
{644} normal block at 0x00074A98, 8 bytes long.
Data: < H > DC 48 07 00 00 00 00 00
{643} normal block at 0x00074A50, 8 bytes long.
Data: < H > C8 48 07 00 00 00 00 00
{642} normal block at 0x00074A08, 8 bytes long.
Data: < H > B4 48 07 00 00 00 00 00
{641} normal block at 0x000749C0, 8 bytes long.
Data: < H > A0 48 07 00 00 00 00 00
{640} normal block at 0x00074E90, 1 bytes long.
Data: < > 00
{639} normal block at 0x00074870, 272 bytes long.
Data: < t N > 20 03 1B 01 CD CD CD CD 74 FA 1B 01 90 4E 07 00
{638} normal block at 0x00074F68, 72 bytes long.
Data: <C:\Users\Baz> 43 3A 5C 55 73 65 72 73 5C 45 42 41 52 47 52 49
{637} normal block at 0x00074E48, 8 bytes long.
Data: <hO G > 68 4F 07 00 47 00 00 00
{616} normal block at 0x00074EE0, 72 bytes long.
Data: <C:\Users\Baz> 43 3A 5C 55 73 65 72 73 5C 45 42 41 52 47 52 49
{595} normal block at 0x00074828, 8 bytes long.
Data: < > F0 F9 1B 01 00 00 00 00
{594} normal block at 0x000747E8, 1 bytes long.
Data: < > 00
{561} normal block at 0x000747A0, 5 bytes long.
Data: <fast > 66 61 73 74 00
{496} normal block at 0x00074760, 1 bytes long.
Data: < > 00
{311} normal block at 0x00074720, 1 bytes long.
Data: < > 00
{282} normal block at 0x000746E0, 2 bytes long.
Data: <* > 2A 00
{253} normal block at 0x00074698, 5 bytes long.
Data: <auto > 61 75 74 6F 00
Object dump complete.
What am I doing wrong?
Adding to the accepted answer, the Google documentation states:
Since the statically initialized Google Test singleton requires allocations on the heap, the Visual C++ memory leak detector will report memory leaks at the end of the program run. The easiest way to avoid this is to use the _CrtMemCheckpoint and _CrtMemDumpAllObjectsSince calls to not report any statically initialized heap objects. See MSDN for more details and additional heap check/debug routines.
This involves calling _CrtMemCheckpoint just after ::testing::InitGoogleTest and then calling _CrtMemDumpAllObjectsSince after RUN_ALL_TESTS(). The main function looks a bit like this:
::testing::InitGoogleTest(&argc, argv);

// Get a checkpoint of the memory after Google Test has been initialized.
_CrtMemState memoryState = {0};
_CrtMemCheckpoint( &memoryState );

int retval = RUN_ALL_TESTS();

// Check for leaks after tests have run
_CrtMemDumpAllObjectsSince( &memoryState );

return retval;
Unfortunately, if a test fails, Google Test itself causes a memory leak, which means this isn't a perfect solution.
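Put together as a complete main, the FAQ approach looks roughly like this (a sketch; gmock is omitted for brevity):

#define _CRTDBG_MAP_ALLOC
#include <crtdbg.h>
#include "gtest/gtest.h"

int main(int argc, char **argv)
{
    ::testing::InitGoogleTest(&argc, argv);

    // Checkpoint after Google Test's static singleton has allocated,
    // so those allocations are excluded from the leak report.
    _CrtMemState memoryState = { 0 };
    _CrtMemCheckpoint( &memoryState );

    int retval = RUN_ALL_TESTS();

    // Dump only allocations made while the tests were running.
    _CrtMemDumpAllObjectsSince( &memoryState );
    return retval;
}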
You're not doing anything wrong. The 'memory leaks' come from heap allocations of the statically initialized Google Test singleton class.
Here's the answer from the Google Test FAQ: How do I suppress the memory leak messages on Windows?