C++ memory allocation under Green Hills INTEGRITY - c++

Sorry, I'm new to Green Hills. I'm using MULTI 6.1.6 and my language of choice is C++.
I have a problem when I try to use the simulator to instantiate an object of a class bigger than 1 MB in size using new:
Class_Big* big_obj;
big_obj = new Class_Big();
Class_Small* Small_obj;
Small_obj = new Class_Small();
If sizeof(Class_Big) > 1 MB, the class constructor is simply never called; new returns NULL, execution goes on to the next instruction (Class_Small* Small_obj;), and the next object is created correctly. If I remove some member variables from Class_Big so that its size is < 1 MB, the code works fine and the object is created.
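For reference, the failure can be made explicit with the non-throwing form of new (a sketch only: standard C++ new would normally throw std::bad_alloc rather than return NULL, so the runtime here presumably uses the non-throwing behavior):

#include <new>

Class_Big* big_obj = new (std::nothrow) Class_Big();
if (big_obj == NULL)
{
    // Allocation failed: the heap/memory pool is too small
    // to satisfy a request of sizeof(Class_Big) bytes.
}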
I added both
MemoryPoolSize="0x200000"
HeapSize="0x200000"
to my xml file.
Another error I get in the build phase, if I use a library that has a big class:
intex: error: Not enough RAM for request.
intex: fatal: Integrate failed.
Error: build failed
Can you help with it?
Thanks

To specify memory sizes for the heap and memory pool in the MULTI GUI, go to the .int file (it can be found under the .gpj dropdown when it is expanded) and double-click on it to edit it. Then right-click inside the purple box and choose "Edit". Go to the "Attributes" tab, where you can increase the memory pool size and heap size.
Alternatively, you can just edit the .int file in a text editor (see the snippet below); if you prefer the GUI, follow the steps above.
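From memory, an INTEGRITY integrate (.int) file looks roughly like the following; the exact keywords are in the Integrate manual, so treat this as a sketch rather than a verified example:

AddressSpace
    Filename        my_app
    MemoryPoolSize  0x200000
    HeapSize        0x200000
    Task Initial
        StackLength 0x8000
    EndTask
EndAddressSpace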
Also from their manual:
"Check the .bsp file in use. The memory declared with the
MinimumAddress/MaximumAddress keywords must match your board's memory.
If it does not, modify these keywords as needed. If the memory
declared in the .bsp file does match the board, you must modify your
application to use less memory."
Additionally, check the default.ld: you can set the values for the RAM limits there. Look at __INTEGRITY_RamLimit and the other values there. Hope this helps!
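If I remember the layout correctly, the relevant part of default.ld is a CONSTANTS block along these lines (the names and values below are from memory and only illustrative, so verify them against your own file):

CONSTANTS
{
    __INTEGRITY_HeapSize  = 0x200000
    __INTEGRITY_StackSize = 0x4000
    __INTEGRITY_RamLimit  = 0x1000000
}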

With INTEGRITY you are in total control of how much memory is used for each partition; it is a static configuration. Everything (code, stacks, heap, you name it) comes out of that. So if you have a lot of code, automatics, etc. in the partition, then a memory allocation may fail if you ask for too much. Try increasing the size.

For the first part of the problem: basically, I should have modified the "VirtualHeapSize" in the .ld component file.
I am still trying to figure out the second part.

Related

Visual Studio C++: same program using different amounts of RAM under different executable names

This might be a general question, but this problem is really getting me confused.
I have two different C++ applications, compiled with Visual Studio 2012, each needing an instance of the same object. I put a breakpoint before the creation of each object and measured the RAM usage by stepping through my programs. The first one takes approximately 2.5 MiB more RAM after creating the object, while the second app takes 30 MiB more!
Both objects are created using a simple call to new with the same parameters. The code behind the constructors is the same.
As a detail: the first project contains far fewer .cpp files than the second one, so I thought it might be a problem of internal fragmentation of the exe. I also tried to break BEFORE any code was executed inside the main function, and the memory usage was already very different (6 MiB for the first app, 35 MiB for the second one).
Does anyone have any idea what might be going on?
EDIT: The object in question is a DirectX context, with a constructor creating a Direct3D instance and a device. Both the instance AND the device are created the same way, but both show different RAM usage between the two apps.
Here is the code for the D3D device creation:
d3d_presentParams = new D3DPRESENT_PARAMETERS;
ZeroMemory(d3d_presentParams, sizeof(D3DPRESENT_PARAMETERS));

d3d_presentParams->Windowed = !window->isFullscreen();
d3d_presentParams->SwapEffect = D3DSWAPEFFECT_DISCARD;
d3d_presentParams->hDeviceWindow = window->getHwnd();
d3d_presentParams->MultiSampleType = antiAlias;
d3d_presentParams->EnableAutoDepthStencil = true;
d3d_presentParams->AutoDepthStencilFormat = D3DFMT_D32F_LOCKABLE;
d3d_presentParams->PresentationInterval = (info.m_vsync ? D3DPRESENT_INTERVAL_ONE : D3DPRESENT_INTERVAL_IMMEDIATE);

d3d_presentParams->BackBufferCount = 1;
d3d_presentParams->BackBufferFormat = D3DFMT_A8R8G8B8;
d3d_presentParams->BackBufferWidth = m_viewportSize.x;
d3d_presentParams->BackBufferHeight = m_viewportSize.y;

d3d->CreateDevice(
    D3DADAPTER_DEFAULT,
    D3DDEVTYPE_HAL,
    window->getHwnd(),
    D3DCREATE_HARDWARE_VERTEXPROCESSING | D3DCREATE_MULTITHREADED,
    d3d_presentParams,
    &d3d_device);
EDIT 2: The problem has been "solved" for now. See the answer below.
Alright, I "solved" it, kind of. You're not going to believe what the problem was.
TL;DR : Renaming the executable lowers the RAM usage from 80 MiB to 28 MiB. It seems that Windows is suspicious of non-verified applications, as I discovered a directory full of logs inside C:/Users/<Me>/AppVerifierLogs/.
It seemed that RAM usage was linked to how many source files the project contained, so I tried copying one of my projects entirely. I had to rename it, because a solution cannot have two projects with the same name... and guess what, memory usage was down to 7.3 MiB at the beginning of main(), instead of 30 MiB previously.
Later, I found out that it was a specific executable name that caused the extra memory use. I simply renamed the .exe file, and the problem was gone. I even renamed Firefox's exe to my app's name, and that made it crash.
And finally, I discovered in C:/Users/<me>/AppVerifierLogs/ at least 3000 files named after my application. This immediately made me think about the way Windows tries to verify applications (e.g. to check that they're not viruses).
I have no idea where this is coming from, nor how to turn off this verifier, but it's really annoying as a dev to have Windows panicking about an application you're developing, and injecting (possibly useless) stuff into it.
The fix for now is just to rename the executable. If anyone knows which application might have access to this "AppVerifierLogs" directory, please mention it, because even Google searches don't seem to be able to help at this point...

C++ application: clear commandline arguments?

I have a C++ application which uses CCommandLineInfo to parse command line arguments.
One of these arguments is a password which we encrypt in memory with CryptProtectMemory after the application starts.
At that point I want to get rid of the password, which is still available in plain text in memory (when I create a memory dump it can be retrieved).
Is there a way to clear the command line arguments? I tried clearing (overwriting with empty strings) __argv but the arguments were still visible in the memory dump.
[edit]
I tried clearing the command-line arguments like this, but it didn't work; the arguments are still in memory.
for (int i = 0; i < __argc; i++)
    __argv[i] = "----------------------";

TCHAR* cmdLine = GetCommandLine();
SecureZeroMemory(cmdLine, strlen(cmdLine));
There is a well-known trick/hack to clear the command line from the process memory (see this answer), but even if you apply it, you can still easily fetch the command line from e.g. Process Explorer, since it makes a copy of the command line when the process is started. Thus, there is no way to prevent a tool like that from showing it.
Passing a password as a command-line parameter is simply a no-no. The only solution I can think of is to store the password encrypted/hashed (or, worst case, unencrypted) in a file and then pass that file as a parameter.
I'm afraid cleaning up argv is not enough, as the source of argv is still available via GetCommandLine(). Ultimately this information is stored in RTL_USER_PROCESS_PARAMETERS in the Process Environment Block (PEB). The C runtime caches this information in argv, and other libraries may cache it too.
You'd be better off passing your sensitive data through some other IPC mechanism, such as shared memory or a pipe. Then you only need to clean your own memory.
If you still want to locate the original command line, here's the approximate direction: NtCurrentTeb() to get the TEB, which holds a pointer to the PEB, which holds a pointer to RTL_USER_PROCESS_PARAMETERS, which finally contains a pointer to the command line.
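Putting that direction into code, a minimal sketch might look like this (it relies only on the partial structure definitions in winternl.h; these layouts are largely undocumented and can change between Windows versions, so treat it as illustrative):

#include <windows.h>
#include <winternl.h>

void WipePebCommandLine()
{
    // TEB -> PEB -> RTL_USER_PROCESS_PARAMETERS -> CommandLine
    PEB* peb = NtCurrentTeb()->ProcessEnvironmentBlock;
    UNICODE_STRING* cmd = &peb->ProcessParameters->CommandLine;
    if (cmd->Buffer != NULL && cmd->Length > 0)
    {
        SecureZeroMemory(cmd->Buffer, cmd->Length); // Length is in bytes
        cmd->Length = 0;
    }
}

Even so, as noted above, a tool that copied the command line when the process started will still be able to show it.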

Can a process be limited on how much physical memory it uses?

I'm working on an application which has a tendency to use excessive amounts of memory, so I'd like to reduce this.
I know this is possible for a Java program by adding a maximum heap size parameter at startup (e.g. java.exe ... -Xmx4g), but here I'm dealing with a native executable on a Windows 10 system, so this is not applicable.
The title of this post refers to this URL, which mentions a way to do this, but which also states:
Maximum Working Set. Indicates the maximum amount of working set assigned to the process. However, this number is ignored by Windows unless a hard limit has been configured for the process by a resource management application.
Meanwhile I can confirm that the following lines of code indeed don't have any impact on the memory usage of my program:
HANDLE h_jobObject = CreateJobObject(NULL, L"Jobobject");
if (!AssignProcessToJobObject(h_jobObject,
        OpenProcess(PROCESS_ALL_ACCESS, FALSE, GetCurrentProcessId())))
{
    throw "COULD NOT ASSIGN SELF TO JOB OBJECT!";
}

JOBOBJECT_EXTENDED_LIMIT_INFORMATION tagJobBase = { 0 };
tagJobBase.BasicLimitInformation.MaximumWorkingSetSize = 1; // far too small, just to see what happens

BOOL bSuc = SetInformationJobObject(h_jobObject, JobObjectExtendedLimitInformation,
                                    (LPVOID)&tagJobBase, sizeof(tagJobBase));
=> bSuc is true. Or is there anything else I should expect?
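For reference, here is a minimal sketch of what I understand the documented pattern to be. I have not verified it; it assumes that the working-set fields are ignored unless LimitFlags contains JOB_OBJECT_LIMIT_WORKINGSET (in which case both minimum and maximum must be set), and that a hard cap on committed memory needs JOB_OBJECT_LIMIT_PROCESS_MEMORY instead:

#include <windows.h>

bool LimitCurrentProcessMemory(SIZE_T maxBytes)
{
    HANDLE hJob = CreateJobObject(NULL, NULL);
    if (hJob == NULL)
        return false;

    // Set the limits BEFORE assigning the process to the job.
    JOBOBJECT_EXTENDED_LIMIT_INFORMATION info = { 0 };
    info.BasicLimitInformation.LimitFlags = JOB_OBJECT_LIMIT_PROCESS_MEMORY;
    info.ProcessMemoryLimit = maxBytes; // hard cap on committed memory, in bytes

    if (!SetInformationJobObject(hJob, JobObjectExtendedLimitInformation,
                                 &info, sizeof(info)))
        return false;

    return AssignProcessToJobObject(hJob, GetCurrentProcess()) != FALSE;
}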
On top of this, the mentioned tools (resource management applications, like Hyper-V) don't seem to work on my Windows 10 system.
Besides this, there is another post about this subject, "Is there any way to force the WorkingSet of a process to be 1GB in C++?", but the results there seem to be negative too.
For a good understanding: I'm working in C++, so the solutions proposed in that URL are not applicable.
So now I'm stuck with a simple question: is there a way, implementable in C++, to limit the memory usage of the current process, running on Windows 10?
Does anybody have an idea?
Thanks in advance

Visual C++ 6 MFC MapViewOfFile returns error code 8

I have a program that creates a file mapping. The CreateFileMapping call succeeds: m_hMap = CreateFileMapping(m_hFile, 0, dwProtect, 0, m_dwMapSize, NULL);. But the subsequent call to MapViewOfFile(m_hMap, dwViewAccess, 0, 0, 0) fails with error code 8, which is ERROR_NOT_ENOUGH_MEMORY, or the error string "Not enough storage is available to process this command".
So I'm not totally understanding what MapViewOfFile does for me, and how to fix the situation.
some numbers...
m_dwMapSize = 453427200
dwProtect = PAGE_READWRITE;
dwViewAccess = FILE_MAP_ALL_ACCESS;
I think my page size is 65536
For a very large file, it is recommended to read and process it in small pieces, and MapViewOfFile is the function used to map each piece into memory.
Look at http://msdn.microsoft.com/en-us/library/windows/desktop/aa366761(v=vs.85).aspx: the offset parameters exist precisely so you can map a very large file in pieces. Very large view requests mostly fail due to address-space fragmentation and related reasons; a 32-bit process may simply not have ~433 MB of contiguous free address space. A sketch of the chunked approach is below.
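A minimal sketch of that chunked approach (assuming the mapping was created with PAGE_READWRITE, and using a chunk size that is a multiple of the 64 KB allocation granularity, as MapViewOfFile offsets must be; error handling trimmed):

#include <windows.h>

void ProcessMappingInChunks(HANDLE hMap, ULONGLONG totalSize)
{
    const ULONGLONG chunk = 16ULL * 1024 * 1024; // 16 MB, a multiple of 64 KB

    for (ULONGLONG offset = 0; offset < totalSize; offset += chunk)
    {
        SIZE_T bytes = (SIZE_T)min(chunk, totalSize - offset);
        void* view = MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS,
                                   (DWORD)(offset >> 32),        // high 32 bits of offset
                                   (DWORD)(offset & 0xFFFFFFFF), // low 32 bits of offset
                                   bytes);
        if (view == NULL)
            break; // check GetLastError() in real code

        // ... process 'bytes' bytes at 'view' ...

        UnmapViewOfFile(view);
    }
}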
If you are running a 32-bit executable on a 64-bit system, the process can be given a 4 GB address space (instead of 2 GB) by setting the large-address-aware bit.
Go to Configuration Properties -> Linker -> System and enable the large-address-aware setting.
Yes, /LARGEADDRESSAWARE; check that option.

VC2008 C++ Memory Leaks

Please note that my English is limited, but I'll try my best to explain.
I am making an MFC project in Visual Studio 2008 SP1.
This project includes a static library built with 2008/SP1/native C++.
The problem occurs in these steps:
1) Build and start debugging the MFC project.
2) Click the x button on the main window, or press Alt+F4, to exit the program.
3) The main window closes at once.
4) But looking at the Processes tab of Task Manager, the process is still alive.
5) If I kill the process in Task Manager, it is killed at once.
6) But Visual Studio stays in debugging mode, and it takes a very long time for the IDE to return to normal.
7) The time is 5~10 minutes.
8) And the output log says: detected memory leaks!!
9) The log is very large, almost 11 megabytes of text.
And I found the cause:
1) The static library always creates an instance of its main functional class at start-up, using the new operator (at static initialization time, before main).
2) The static library's constructor has the following code:
VPHYSICS::VPHYSICS()
{
    m_tickflowed = 0;
    QueryPerformanceFrequency(&cpu_freq);
    SetTickTime(30);

    m_state[VPHYSTATE_SPEED_MAX] = SPEED_SCALAR_MAX;
    m_state[VPHYSTATE_LIMITED_ACCELARATION] = FALSE;
    m_state[VPHYSTATE_FRICTIONENABLE] = TRUE;
    m_state[VPHYSTATE_FRICTIONFACTOR] = 1.0f;
    m_state[VPHYSTATE_GRAVITY] = 9.8065f;
    m_state[VPHYSTATE_ENGINESPEED_DELAY_HIGH] = 0.0f;
    m_state[VPHYSTATE_ENGINESPEED_DELAY_LOW] = 0.0f;
    m_state[VPHYSTATE_FRICTION_RATIO] = 1.0f;
    m_state[VPHYSTATE_DIMENSION_GLOBAL] = 2;
    m_state[VPHYSTATE_COLLISION_UNFRICTIONABLE] = TRUE;
    m_state[VPHYSTATE_PAULI_EXCLUSION_ENABLE] = TRUE;
    m_state[VPHYSTATE_PAULI_EXCLUSION_RATIO] = 1.0f;
    m_state[VPHYSTATE_FRICTION_SMOOTHLY] = 1.0f;
    m_state[VPHYSTATE_COLLHANDLER_OUTER] = TRUE;

    m_dwSuspendedCount = 0;
    InitializeCriticalSection(&m_criRegister);
    InitializeCriticalSection(&cri_out);
    ZeroMemory(m_objs, sizeof(m_objs));
    m_bThreadDestroy = FALSE;
    m_hPhysicalHandle = 0;
    m_nPhysicalThread1ID = 0;
    m_nTimeMomentTotalCount = 0;
    m_hGarbageCollector = 0;
    m_nGarbageCollectorID = 0;
    m_PhyProcessIterID = NULL;

    // Pre-fill the free-object ID list.
    for (DWORD i = 1; i < MAX_OBJECT_NUMBER; i++)
    {
        m_objAvaliable.push_back(i);
    }
}
// This code is from my static library, a game physics engine.
And the problem occurs when destroying this instance:
When the delete operator is called (at the end of the program), it takes a very long time.
When I remove the

for (DWORD i = 1; i < MAX_OBJECT_NUMBER; i++)
{
    m_objAvaliable.push_back(i);
}

loop, or decrease MAX_OBJECT_NUMBER (originally it was #define MAX_OBJECT_NUMBER 100000, but I decreased it to 5 or 10), the 'long time' disappears!!
The type of m_objAvaliable is std::list<DWORD>.
This member variable doesn't seem to be the cause of a memory leak (I assumed this container doesn't do any heap allocation of its own).
Other projects that include this library don't have this problem.
(But this is the first time the library has been included by an MFC project, and this is the only case where I see the problem.)
Does anyone have an idea what could solve this problem?
If you want more information, comment on this article; I'll reply ASAP.
More: it only happens in DEBUG mode; in Release mode, this problem doesn't occur.
I believe the problem you are experiencing is not, in fact, a problem at all. MFC uses its own debug version of new (in Release mode, it uses the regular, default new). MFC does this so it can try to be helpful in telling you that you have memory leaks.
Trouble is, I think the deallocation of the objects in the static library is occurring after MFC has decided to dump any allocations it thinks haven't been properly deallocated. Given that you have so many objects (each node of a std::list<DWORD> is a separate heap allocation), it's spending an awfully long time dumping this stuff to the console.
At the end of the day, MFC thinks there are memory leaks when there aren't.
Your solutions are:
Stop MFC using DEBUG NEW. Remove any lines in your MFC project that say #define new DEBUG_NEW. The disadvantage of this method is that, of course, if you inadvertently create real memory leaks, it can't track them.
Have some kind of initialisation/de-initialisation function pair in your static library. Call the de-initialisation function when your MFC application exits, before MFC starts to trawl through the allocations it still thinks exist (see the sketch below).
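A minimal sketch of that second option. The entry-point names PhysicsInit/PhysicsShutdown are hypothetical, not the library's real API, and VPHYSICS is stubbed here just to make the snippet self-contained:

#include <cstddef>

class VPHYSICS { /* stand-in for the library's engine class */ };

static VPHYSICS* g_physics = NULL;

// Replaces the static-initialization-time 'new VPHYSICS()'.
void PhysicsInit()
{
    if (g_physics == NULL)
        g_physics = new VPHYSICS();
}

// Call this from e.g. CWinApp::ExitInstance(), so the instance (and
// its 100000 std::list nodes) is freed before MFC dumps "leaks".
void PhysicsShutdown()
{
    delete g_physics;
    g_physics = NULL;
}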