I am building a library for integration in Fortran. I have used dynamic arrays in it. The problem is that whenever I try to integrate with a very small step size, the system naturally runs out of RAM soon, and gfortran reports an out-of-memory error.
What I wish to do now is dynamically monitor the RAM usage inside the library modules and insert a stop command in every subroutine, so that whenever the RAM usage goes above, say, 4/5 of total RAM, the process stops with an error message telling the user to use a time step within some specified limit, which would be derived from the range of integration. Is there any way to monitor RAM usage in a Fortran program, and also to get the total available system RAM from within the program, so that it can be used to stop the program?
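On Linux, one approach is to parse /proc/self/status for the process's resident set size and /proc/meminfo for the machine's total RAM. A minimal sketch of that idea in C++ follows (the readKb helper and the 4/5 threshold are illustrative, not any standard API); the same line-by-line parsing can be done from Fortran with ordinary OPEN/READ statements, or a helper like this could be called through ISO_C_BINDING:

// Sketch (Linux-specific): resident set size of this process from
// /proc/self/status and total RAM from /proc/meminfo, both in kB.
#include <fstream>
#include <iostream>
#include <string>

// Return the value of a "Key:   12345 kB" line from the given file,
// or -1 if the key is not found. (readKb is a hypothetical helper.)
long readKb(const char* path, const std::string& key) {
    std::ifstream f(path);
    std::string line;
    while (std::getline(f, line))
        if (line.compare(0, key.size(), key) == 0)
            return std::stol(line.substr(key.size() + 1));
    return -1;
}

int main() {
    long used  = readKb("/proc/self/status", "VmRSS");    // this process
    long total = readKb("/proc/meminfo",     "MemTotal"); // whole machine
    if (used > 0 && total > 0 && used > 4 * total / 5) {
        std::cerr << "Memory usage above 4/5 of RAM; use a larger step size.\n";
        return 1;  // in Fortran this would be the STOP the question asks for
    }
}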
Related
I'm using CentOS 7 and I'm running a C++ application. Recently I switched to a newer version of a library which the application was using for various MySQL C API functions. After integrating the new library, I saw a tremendous increase in the application's memory usage: it crashes if left running for more than a day or two. Precisely, what happens is that the application's memory usage keeps increasing up to the point where it alone is using 74.9% of the system's total memory, and then it is forcefully shut down by the system.
Is there any way to track the memory usage of the whole application, including the static variables? I've already tried valgrind's Massif tool.
Can anyone tell me the possible reasons for the increased memory usage, or any tools that can give me a deep insight into how the memory is being allocated (both static and dynamic)? Is there any tool that can report the memory allocation of a C++ application running in a Linux environment?
Thanks in advance!
Static memory is allocated when the program starts. Are you seeing ongoing memory growth or a one-time increase at startup?
Since it takes 'a day or two to crash', the trouble is likely a memory leak or unbounded growth of a data structure. Valgrind should be able to help with both. If valgrind shows a big leak with the --leak-check=full option then you have likely found the issue.
To check for unbounded growth, put a preemptive _exit() in the program at a point where you suspect the heap has grown. For example, put a timer on the main loop and have the program _exit after 10 minutes. If valgrind then shows a big 'in use at exit' figure, you likely have unbounded growth of a data structure rather than a leak. Massif can help track this down, and ms_print gives details of the allocations along with the function stack.
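A rough sketch of that timed early exit (the 10-minute cutoff is just the example figure from above):

// Sketch: leave the main loop after a fixed interval so valgrind's
// "in use at exit" report reflects the grown heap.
#include <chrono>
#include <unistd.h>  // _exit

int main() {
    const auto start = std::chrono::steady_clock::now();
    for (;;) {
        // ... one iteration of the suspect main loop ...
        if (std::chrono::steady_clock::now() - start > std::chrono::minutes(10))
            _exit(0);  // exit immediately, leaving the heap as-is for valgrind
    }
}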
If you find an issue, try switching back to the older version of your library. If the problem goes away, check and make sure you are using the API properly in the new version. If you don't have the source code then you are a bit stuck in terms of a fix.
If you want to go the extra mile, you can write a shared-library interposer for malloc/free to see what is happening. Here is a good start. Linux has backtrace functionality that can help with determining the exact call stack.
Finally, if you must use the 3rd-party library and find the heap growing without bound or leaking, then you can use the shared-library interposer to directly call free/delete. This is a risky, last-ditch, unrecommended strategy, but I've used it in production to limp a process along.
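For concreteness, a minimal sketch of the tracing-interposer idea, built as a shared library and loaded with LD_PRELOAD. A real interposer needs more care (dlsym itself may allocate during bootstrap) and should interpose calloc/realloc/free as well:

// mtrace.cpp -- build: g++ -std=c++11 -shared -fPIC mtrace.cpp -o libmtrace.so -ldl
// Run: LD_PRELOAD=./libmtrace.so ./your_app
#include <dlfcn.h>
#include <cstddef>
#include <cstdio>

static void* (*real_malloc)(size_t) = nullptr;
static thread_local bool in_hook = false;  // fprintf may itself call malloc

extern "C" void* malloc(size_t size) {
    if (!real_malloc)
        real_malloc = (void* (*)(size_t))dlsym(RTLD_NEXT, "malloc");
    void* p = real_malloc(size);
    if (!in_hook) {
        in_hook = true;
        fprintf(stderr, "malloc(%zu) = %p\n", size, p);  // or record a backtrace here
        in_hook = false;
    }
    return p;
}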
I have a program that runs a specific task over several files, so effectively it has a structure as follows:
std::vector<std::string> fileList = getFileList();
for (std::size_t i = 0; i < fileList.size(); i++)
{
    processFile(fileList[i]);
}
I am worried that there is a small memory leak in the processFile function (it is a relatively complex function which calls several other functions and uses several classes).
To check whether I really have a memory leak in this function, I'd like to measure the amount of memory that my application uses before and after the call to processFile, run it over a very large set of data, and see how the memory usage changes during processing.
Is there any way that I can measure the amount of memory that my application is using inside that application?
In the same way, can I find the amount of memory that each part of my application is using during run time?
If you want to measure how much memory the code above uses at runtime, you must calculate this:
(number of elements in fileList) * sizeof(std::string) + the total length of the strings in fileList
But if you want to get your whole process's memory usage on Windows at runtime, you can call the GetProcessMemoryInfo API in your program and pass it your process handle (from the GetCurrentProcess() API). Sample usage of this API:
https://msdn.microsoft.com/en-us/library/ms682050.aspx
See a complete answer to your question at this link:
How to determine CPU and memory consumption from inside a process?
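As a sketch, such a call might look like this (link against psapi.lib):

// Sketch: query this process's memory counters on Windows.
#include <windows.h>
#include <psapi.h>
#include <cstdio>

int main() {
    PROCESS_MEMORY_COUNTERS pmc;
    if (GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc))) {
        printf("WorkingSetSize: %zu bytes\n", (size_t)pmc.WorkingSetSize);
        printf("PagefileUsage:  %zu bytes\n", (size_t)pmc.PagefileUsage);
    }
    return 0;
}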
A common solution to detecting memory leaks and monitoring memory usage would be to run your program with valgrind.
I am analysing a high memory consumption problem in our software. I have a core file corresponding to this high memory consumption (generated by killing our application). But I am not able to view the actual memory consumption using this core file. I have used TotalView and gdb; with these two I am not getting a snapshot of the total memory consumed by my process, or of which library is eating up all the memory.
This memory consumption hits us over 10 to 20 days of run time, and hence I am trying to find out what has caused it.
Can valgrind help me in analysing this core file?
Any input/suggestion is highly appreciated.
#suresh,
Hi, I'm the product manager for TotalView at Rogue Wave Software.
Can you describe the scenario a bit more? Is the program running along with "normal memory consumption" for a long time and then suddenly the memory consumption goes through the roof? Or is the program slowly and steadily consuming memory till it exhausts available resources?
When it crashes, is it crashing because it literally runs out of memory, or are you killing it because it has started swapping and become unresponsive?
In general I'd recommend running it under MemoryScape (in TotalView or the Standalone version) and when it starts to show unexpected memory consumption you want to pause it and run a memory leak report. It is likely that this will point right to the problem.
It is possible that the memory use isn't a "classical" leak because you still have pointers referencing the data -- but you are simply over-allocating. In this case you won't see anything on the leak report, but you may be able to pick out which allocation has "gone bad" by watching which allocations are growing. There are a number of ways to do this.
Pause periodically and note how the heap usage breaks down in the Heap Status Source Code report. For example you may note that the number of allocations associated with a specific source code file just keeps increasing.
If you are using TotalView you can use the "set heap baseline" functionality when the program seems to be behaving well, then filter against this baseline. Again you may want to use the source code report (though the graphical and backtrace reports support filtering too).
Or you can use the "export memory data" feature to store an image of what the "normal" heap status is. This creates a binary heap status file. Then let the program run till you get into the state where your program has high memory consumption. At that point you pause your live app, load the stored heap data file and you can do a comparison.
Wow, this is getting long. One final thought. You said you are getting core files. Under the debugger you should get a chance to examine the running program before it gets cleaned up. If this doesn't happen let me know. If you really want to work via core files (for example, this is happening in a production environment and you don't want to run the debugger there) let us know -- there are techniques where we can instrument the application using the HIA and then enable you to do offline analysis of your heap status.
Good luck!
Chris Gotbrath
Principal Product Manager for TotalView and ThreadSpotter, Rogue Wave Software
email: first . last at roguewave . com
I'm developing C++ code on Linux which can run out of memory, go into swap, slow down significantly, and sometimes crash. I'd like to prevent this by allowing the user to specify a limit on the proportion of total system memory that the process can use. If the program exceeds this limit, the code should output some intermediate results and terminate cleanly.
I can determine how much memory is being used by reading the resident set size from /proc/self/stat. I can then sum this up across all the parallel processes to give me a total memory usage for the program.
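For what it's worth, a sketch of that /proc/self/stat read (rss is field 24, in pages; note that field 2, the command name, can contain spaces, so it's safest to skip past the closing parenthesis first):

// Sketch: resident set size of this process, from /proc/self/stat.
#include <fstream>
#include <string>
#include <unistd.h>  // sysconf

long residentBytes() {
    std::ifstream f("/proc/self/stat");
    std::string skip;
    std::getline(f, skip, ')');              // fields 1-2: pid and (comm)
    for (int i = 0; i < 21; ++i) f >> skip;  // fields 3-23
    long rssPages = 0;
    f >> rssPages;                           // field 24: rss, in pages
    return rssPages * sysconf(_SC_PAGESIZE);
}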
The total system memory can be obtained via a call to sysconf(_SC_PHYS_PAGES), multiplied by the page size (see How to get available memory C++/g++?). However, if I'm running on a parallel cluster, then presumably this figure only gives me the total memory for the current cluster node. I may, for example, be running 48 processes across 4 cluster nodes (each with 12 cores).
So, my real question is: how do I find out which processor a given process is running on? I could then sum up the memory used by processes running on the same cluster node, compare this with the available memory on that node, and terminate the program if it exceeds a specified percentage on any of the nodes the program is running on. I would use sched_getcpu() for this, but unfortunately I'm compiling and running on a system with glibc 2.5, and sched_getcpu() was only introduced in glibc 2.6. Also, since the cluster is running an old Linux kernel (2.6.18), I can't use syscall() to call getcpu() either! Is there any other way to get the processor number, or any sort of identifier for the processor, so that I can sum the memory used on each processor separately?
Or is there a better way to solve the problem? I'm open to suggestions.
A competently-run cluster will put your jobs under some form of resource limits (RLIMIT_AS or cgroups). You can do this yourself just by calling setrlimit(RLIMIT_AS,...). I think you're overcomplicating things by worrying about sysconf, since on a shared cluster, there's no reason to think that your code should be using even a fixed fraction of the physical memory size. Instead, you should choose a sensible memory requirement (if your cluster doesn't already provide one - most schedulers do memory scheduling reasonably well.) Even if you insist on doing it yourself, auto-sizing, you don't need to know about which cores are in use: just figure out how many copies of your process are on the node, and divide appropriately. (you will need to figure out which node (host) each process is running on, of course.)
It's worth pointing out that the kernel enforces RLIMIT_AS, not RLIMIT_RSS, and that when you hit this limit, new allocations will fail.
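A sketch of self-imposing such a limit (the 2 GiB figure is just an example value):

// Sketch: cap this process's address space with setrlimit(RLIMIT_AS, ...).
#include <sys/resource.h>
#include <cstdio>

int main() {
    struct rlimit rl;
    rl.rlim_cur = 2UL * 1024 * 1024 * 1024;  // 2 GiB, an example value
    rl.rlim_max = rl.rlim_cur;
    if (setrlimit(RLIMIT_AS, &rl) != 0) {
        perror("setrlimit");
        return 1;
    }
    // Beyond this point, allocations that would exceed the limit fail:
    // malloc returns NULL and operator new throws std::bad_alloc.
    return 0;
}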
Finally, I question the design of a program which uses unbounded memory. Are you sure there's not a better algorithm? Users are going to find your program pretty irritating to use if, after investing significant time in a computation, it decides it's going to try allocating more, then fails. Sometimes people ask this sort of question when they mistakenly think that allocating as much memory as possible will give them better IO buffering (which is naive with respect to the page cache, etc.).
How would I find out the amount of RAM and details about my system, like CPU type and speed, amount of physical memory available, amount of stack and heap memory in RAM, and number of processes running?
Also, is there any way to determine how long it takes your computer to execute an instruction, fetch a word from memory (with and without a cache miss), read consecutive words from disk, and seek to a new location on disk?
Edit: I want to accomplish this on my Linux system using the g++ compiler. Are there any built-in functions for this? Also tell me whether such things are possible on a Windows system.
I just got this question out of curiosity when I was learning some memory-management stuff in C++. Please guide me through this step by step, or maybe online tutorials will do great. Thanks.
With Linux and GCC, you can use the sysconf function declared in the <unistd.h> header.
There are various arguments you can pass to get hardware information. For example, to get the amount of physical RAM in your machine you would need to do:
sysconf(_SC_PHYS_PAGES) * sysconf(_SC_PAGESIZE);
See the man page for all possible usages.
You can get the maximum stack size of a process using the getrlimit system call with the RLIMIT_STACK argument, declared in the <sys/resource.h> header.
To find out how many processes are running on the current machine you can check the /proc directory. Each running process is represented as a file in this directory named by its process ID number.
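Putting those Linux pieces together, a sketch (the /proc scan simply counts the numerically-named directories, one per process):

// Sketch: physical RAM, stack limit, and process count on Linux.
#include <sys/resource.h>
#include <unistd.h>
#include <dirent.h>
#include <cctype>
#include <cstdio>

int main() {
    // Physical RAM = number of pages * page size.
    long bytes = sysconf(_SC_PHYS_PAGES) * sysconf(_SC_PAGESIZE);
    printf("Physical RAM: %ld bytes\n", bytes);

    // Maximum stack size of this process.
    struct rlimit rl;
    if (getrlimit(RLIMIT_STACK, &rl) == 0)
        printf("Stack limit: %llu bytes\n", (unsigned long long)rl.rlim_cur);

    // One numerically-named /proc entry per running process.
    int count = 0;
    if (DIR* d = opendir("/proc")) {
        while (dirent* e = readdir(d))
            if (isdigit((unsigned char)e->d_name[0]))
                ++count;
        closedir(d);
    }
    printf("Running processes: %d\n", count);
    return 0;
}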
For Windows: GetPhysicallyInstalledSystemMemory for installed RAM, GetSystemInfo for CPUs, and the Process Status API for process enumeration. Heap and stack usage can only be obtained by the local process for itself. Remember that stack usage is per-thread, and that in Windows a process can have multiple heaps (use GetProcessHeaps to enumerate them). Externally visible memory usage can be retrieved for each process using GetProcessMemoryInfo.
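And a sketch of the Windows side (assumes Vista SP1 or later for GetPhysicallyInstalledSystemMemory):

// Sketch: installed RAM and processor count on Windows.
#include <windows.h>
#include <cstdio>

int main() {
    ULONGLONG kilobytes = 0;
    if (GetPhysicallyInstalledSystemMemory(&kilobytes))
        printf("Installed RAM: %llu kB\n", kilobytes);

    SYSTEM_INFO si;
    GetSystemInfo(&si);
    printf("Logical processors: %lu\n", (unsigned long)si.dwNumberOfProcessors);
    return 0;
}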
I'm not aware of Win32 APIs for the second paragraph's list. You would probably have to do this at the device-driver level (kernel mode), I would think, if it's even possible. Instruction fetch and execution depend on the processor, the cache size, and the instruction itself (they are not all the same in complexity). Memory access speed will depend on the RAM, the CPU, and the motherboard's FSB speed. Disk access likewise is totally dependent on the system's characteristics.
On Windows Vista and Windows 7, the Windows System Assessment Tool can provide a lot of info. Supposedly it can be programmatically accessed via the WEI API.