I have a legacy application with C++ and C# code that ran on Windows XP SP3 with .NET 2.0 (VS2005). The software does scanning and image processing, with plenty of memory-intensive work. The PC has 2 GB of RAM, and the stack size reserved for the process is 15 MB.
This software was migrated to .NET 4 (VS2010). During the migration, the code logic was not altered. The software works properly for individual scans and processing runs. However, during continuous job runs the software crashes at random places. For all the crashes, the Event Viewer shows "The software was terminated due to stack overflow." Debugging the crash dump points to ntdll.dll (a system DLL).
To fix the issue, the following solutions were tried. None of them worked:
The stack size was increased to 20 MB. The software still crashed.
The process reserves 820 MB via VirtualAlloc at startup. This was increased to 1024 MB, which delayed the crash by about a day, but eventually it still crashed.
alloca was used to allocate memory for local variables; these calls were replaced by _malloca (see the sketch below).
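For illustration, a minimal sketch of what that replacement looks like; the function and buffer names here are made up for the example, not taken from the actual code. _malloca must be paired with _freea, and for larger requests it falls back to the heap instead of taking the memory from the stack:

#include <malloc.h>

// Before: the buffer comes directly from the stack and is released on return.
void processLineOld(size_t pixelCount)
{
    unsigned char* scanline = (unsigned char*)alloca(pixelCount);
    // ... fill and process scanline ...
}

// After: _malloca uses the stack only for small requests and the heap for
// larger ones; every _malloca call must be matched with _freea.
void processLineNew(size_t pixelCount)
{
    unsigned char* scanline = (unsigned char*)_malloca(pixelCount);
    // ... fill and process scanline ...
    _freea(scanline);
}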
Please let me know whether the .NET 4 migration requires a major increase in RAM to run the software without failure. Any input on how memory requirements change when migrating from .NET 2.0 to .NET 4 is welcome.
Related
We're on QNX 7.1.0 and our application is written in C++. We need to find the stack, heap, and data-segment memory usage of the currently running thread. Is there any API or QNX command to find it?
Note: we are not launching our application through the QNX Momentics IDE; we run it directly.
I referred to "How to get CPU load / RAM usage out of QNX?", but it only talks about CPU load.
Can anyone please help out?
I am using a 3D lattice to update two fields in time, with OpenCL kernels for the update rule and a C++ host program, and I run the program under 64-bit Windows with 8 GB of RAM. The application is built with VS2017.
My problem is: no matter whether I use my graphics card or the CPU for the computation, the application is paused by Windows after a brief time (about 15 minutes), and I have to press a key in the open console to wake it up, after which it continues running but stops outputting status information to the console (which it should keep doing).
This happens only when I use a lot of memory, i.e. compute on a big lattice with at least 3 GB of allocated memory; with lower memory consumption the program runs just fine for as long as I need it to.
Of course, I would like to be able to run my simulations without having to watch my PC all the time. I already tried increasing the priority of the process, which did not help.
Is there a way to tell Windows to leave my processes running?
We have a C++-based multi-threaded application on Windows that captures network packets in real time using the WinPcap library and then processes these packets to monitor the network. This application is intended to run 24x7, and it easily consumes 7-8 GB of RAM.
The issue we are observing:
Let's say the application is monitoring 100 Mbps of network traffic and consumes 60% CPU. We have observed that when the application keeps running for a longer duration, like a day or two, its CPU consumption increases to 70-80%, even though it is still processing 100 Mbps of traffic (doing the same amount of work).
We have tried to debug this issue down to the thread level using Process Explorer and noticed that the packet-capturing threads start consuming more CPU over time. The issue is not resolved even by restarting the application; only a machine restart solves the problem.
We have observed that the issue is easily reproducible on Windows Server 2012 R2 during overnight runs. On Windows 7 the issue also happens, but only over a few days.
Any idea what might be causing this?
Thanks in advance.
What about memory allocation? Since you are using a lot of memory, it could be a memory fragmentation problem: if you do several allocations and reallocations of buffers, it of course becomes increasingly costly for the processor to find and hand out the available space.
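A minimal sketch of the idea, assuming the capture path currently copies each packet into a freshly allocated buffer (the handler names here are hypothetical, not from the actual code):

#include <cstddef>
#include <vector>

// Allocating a new buffer for every packet forces the allocator to search
// an increasingly fragmented heap on each call.
void handlePacketAllocEachTime(const unsigned char* data, std::size_t len)
{
    std::vector<unsigned char> copy(data, data + len);  // fresh allocation per packet
    // ... process copy ...
}

// Reusing one per-thread buffer avoids most allocations: its capacity only
// grows and is never released, so after warm-up the allocator is rarely hit.
void handlePacketReuseBuffer(const unsigned char* data, std::size_t len)
{
    static thread_local std::vector<unsigned char> scratch;
    scratch.assign(data, data + len);                   // reuses existing capacity
    // ... process scratch ...
}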
I finally found the reason for the above behavior: it was the WinPcap code that was causing it. After replacing that, we did not observe this behavior anymore.
Question:
My question is: what will the impact be on my application's memory footprint or performance if I replace functions like foo1 below (which I have in my code) with foo2? This function is called frequently in the application.
#include <memory>

#define SIZE 5000

void foo1()
{
    double data[SIZE];
    // ....
}

void foo2()
{
    std::unique_ptr< double[] > data( new double[SIZE] );
    // ....
}
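For comparison, a hedged sketch of a third variant that keeps the allocation out of the hot path entirely; foo3 is a hypothetical name and not part of the original code:

#include <vector>

void foo3()
{
    // One allocation per thread, made on first use and reused on every
    // subsequent call, so neither the stack nor the allocator is hit per call.
    static thread_local std::vector<double> data(SIZE);
    // ....
}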
Context:
My MFC application loads really slowly on the embedded device running Windows 7 after the implementation of new features/modules. The same application loads fast on a PC. One of the differences, and what I suspect is the cause, is that the RAM on the embedded unit is really low: just 768 MB.
I debugged it to find out where the delay occurs and recorded timestamps within the application during the loading process. What I discovered was interesting: when I double-click the exe, it takes about a minute to record the first timestamp, and after that it runs fast, so all the delay is right there.
My theory is that Windows is taking all this time to set up the environment for the exe, and once that is done, it runs fast. The reason I suspect this is that there are a lot of big structures declared on the stack in the application, to the point that I had to move some of them to the heap to get rid of stack overflow errors even on the PC after adding the new features.
What do you think is the cause of the slow, or more accurately delayed, loading of the executable on the low-RAM machine? Do you think it will be fixed if I move all of the big structures from the stack to the heap?
There are not a lot of things that take a minute in modern-day computing. Not on a machine with an embedded version of Windows, either. Not the processor, not the RAM, not the disk.
Except one: networking is still based on assumptions that were last valid in the 1980s. TCP/IP has taken over as the only protocol in common use, but it has a flaw: there is no reasonable way to discover how long a connection attempt might take. So connection timeouts are based on absolute worst-case conditions: trying to hook up to a machine halfway around the world, connected through a modem that needs to spin up the drum to load the program.
The minimum timeout on Windows is hard-baked at 45 seconds. And, in general, a failing connection attempt is a condition that certainly isn't unlikely on an embedded machine: you might have hooked it up to a network to get it initialized, but it isn't connected anymore, or the machine you copied from might no longer be powered up.
Chase it down by first looking for a disconnected disk drive; that is very common. Next, use Sysinternals utilities like TCPView to look for network activity, such as trying to connect to a CRL server. Use Process Explorer to find out where the program is stuck. Mark Russinovich's blog is excellent at showing troubleshooting strategies with these tools. Good luck with it.
I'm creating a game in OpenGL that loads the entire Arial Unicode MS font at startup. The program uses on average 10 MB of memory on my computer (running WinXP SP2) and works without problems, but when I move the program to my laptop (running Vista), wglUseFontBitmaps hangs, keeps allocating memory, and never returns. This problem appeared recently, I have no idea why, and I never had such a problem before. Why does wglUseFontBitmaps do this, and how can I fix it?
Update: I tried an older version and it runs, but it eats 400 MB of memory (so it is not a new problem).
How many glyph display lists are you trying to generate with wglUseFontBitmaps()? Can you show us your invocation? Perhaps Vista is trying to do all 60000-some-odd glyphs in one go, and XP is doing some sort of on-demand construction?
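For reference, a minimal sketch of an invocation that builds display lists only for the 96 printable ASCII glyphs instead of the whole font; the font height, face name, and function name here are placeholders, not taken from the asker's code:

#include <windows.h>
#include <GL/gl.h>

// Build display lists for the printable ASCII range only (codes 32..127),
// rather than asking wglUseFontBitmaps for tens of thousands of glyphs at once.
GLuint buildAsciiFontLists(HDC hdc)
{
    HFONT font = CreateFontA(16, 0, 0, 0, FW_NORMAL, FALSE, FALSE, FALSE,
                             ANSI_CHARSET, OUT_TT_PRECIS, CLIP_DEFAULT_PRECIS,
                             ANTIALIASED_QUALITY, FF_DONTCARE | DEFAULT_PITCH,
                             "Arial Unicode MS");
    HFONT old = (HFONT)SelectObject(hdc, font);

    GLuint listBase = glGenLists(96);           // one display list per glyph
    wglUseFontBitmapsA(hdc, 32, 96, listBase);  // glyphs for ' ' .. '~'

    SelectObject(hdc, old);
    DeleteObject(font);
    return listBase;
}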
I've had good luck with FreeType2 and MS Arial Unicode, though it does take some time to get up to speed with the API. This tutorial can be C++-ized to great effect.