I'm working on a C++ program and I want to find the total memory it uses during its execution. My operating system is Ubuntu 19.10. I found this related question, but it seems to address a rather different problem. Any help/directions would be great. Thanks!
You can use the command line tool top to monitor the memory usage of a process. Simply run:
top -p PID
where PID is the process ID of the C++ executable that you want to monitor the memory usage of.
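If you want the program to report its own memory usage from inside the code rather than watching it in top, a minimal sketch (assuming a Linux /proc filesystem; the VmRSS and VmPeak fields are documented in proc(5)) could look like this:

#include <fstream>
#include <iostream>
#include <string>

int main()
{
    // /proc/self/status contains per-process counters; VmRSS is the current
    // resident set size and VmPeak the peak virtual memory size, both in kB.
    std::ifstream status("/proc/self/status");
    std::string line;
    while (std::getline(status, line))
    {
        if (line.rfind("VmRSS:", 0) == 0 || line.rfind("VmPeak:", 0) == 0)
            std::cout << line << '\n';
    }
    return 0;
}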
I have a daemon process on which I want to perform a memory profile. I chose valgrind and ran it with the massif tool, but since the process never dies, massif never writes its output file. Even when I send a TERM signal to the process, I don't receive any output from massif.
So then I tried installing a valgrind plugin in my Eclipse and tried running the profiler on an already built binary of my daemon process, but when I start the profiler it fails with two kinds of errors:
It fails saying it is not able to load a library. I didn't find any way to set the library path in the profile configuration.
It fails with bad permissions to read a memory address.
So I am not even able to run the profiler in Eclipse.
I tried gdb and getting the memory info, but that is only what "/proc//maps" would give, so it is of no use.
Finally here is my use case:
I have a daemon process that never quits and I want to perform memory profiling on it.
I want to get snapshots of the number of memory allocations that happened, the maximum memory allocated, which instruction makes the most allocations, and so on.
It would be better if I could get a visual interface for the memory profiling, so that I can even share it with my manager.
So please suggest whether there is any such profiler, and any pointers to where to get the documentation, etc.
Thanks in Advance!
Vinay.
When running your program under valgrind, various commands (depending on the tool) can be executed from the shell, using vgdb in standalone mode.
When running with --tool=massif, you can take an on-demand snapshot while your program is running.
See http://www.valgrind.org/docs/manual/manual-core-adv.html#manual-core-adv.valgrind-monitor-commands for more information.
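For example, with the daemon already running under massif, something along these lines (run from another shell; the PID and the output filename are placeholders) asks massif for a snapshot without stopping the daemon:

valgrind --tool=massif ./myDaemon
vgdb --pid=<PID> snapshot massif.snapshot

The snapshot monitor command and the other massif monitor commands are described in the massif chapter of the manual linked above.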
I am working on a big code base. It is heavily multithreaded.
After running the Linux-based application for a few hours, at the end, right before reporting, the application goes silent. It doesn't die, it doesn't crash, it just waits there. Joins, mutexes, condition variables... any of these could be the culprit.
If it had crashed, I would at least have a chance to find the source using a debugger. But this way, I have no clue which tool to use to find the bug. I can't even post a code sample for you. The only thing that could possibly help is to tap MANY places with cout to get a visual sense of where the application is.
Have you been in such a situation? What do you recommend?
If you're running under Linux then just use gdb to run the program. When the application 'silences', interrupt it with CTRL+C, then type backtrace to see the call stack. With this you will find out the function where your application was blocked.
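For example, a session on a multithreaded program might look like the following (./myapp is a placeholder for your binary); since the program is heavily multithreaded, thread apply all backtrace dumps the stack of every thread, which is usually enough to spot a deadlock:

gdb ./myapp
(gdb) run
... wait until the application goes silent, then press CTRL+C ...
(gdb) backtrace
(gdb) thread apply all backtrace

You can also attach to an already running instance with gdb -p <PID> instead of starting it under gdb.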
In the case of Linux, gdb will be a great help. Another tool that can be of great help is strace (it can also be used when there are problems with a program whose source is not readily available, because strace does not need recompilation to trace it).
strace intercepts and records the system calls made by a process, as well as the signals it receives. It shows the order of events and the return/resumption paths of the calls, which can take you much closer to the problem area.
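For instance, to attach to the already running process (the PID is a placeholder), something like:

strace -f -tt -o trace.log -p <PID>

-f follows child processes and threads, -tt adds microsecond timestamps, and -o writes the trace to a file, so you can inspect the last recorded calls before the hang.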
iotop, LTTng and Ftrace are a few other tools that can be helpful in this scenario.
This is a sequel to my previous question. I am using fork to create a child process. Inside the child, I run a command as follows:
if ((childpid = fork()) == 0)
{
    // child: run the binary through the shell
    system("./runBinary ");
    exit(1);
}
My runBinary has the functionality of measuring how much time it takes from start to finish.
What amazes me is that when I run runBinary directly on the command line, it takes ~60 seconds. However, when I run it as a child process, it takes longer, ~75 seconds or more. Is there something I can do about this, or am I currently doing something wrong that leads to it?
Thanks for the help in advance.
MORE DETAILS: I am running on a Linux RHEL server with 24 cores. I am measuring CPU time. At a time I only fork 8 children (sequentially), each of which is bound to a different core using taskset (not shown in the code). The system is not loaded except for my own program.
The system() function invokes the shell. You can do anything inside it, including running a script. This gives you a lot of flexibility, but it comes with a price: you're loading a shell, and then runBinary inside it. Although I don't think loading the shell would be responsible for so much of a time difference (15 seconds is a lot, after all), since it doesn't seem you need that - you just want to run the app - try using something from the exec() family instead.
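A minimal sketch of that replacement, assuming ./runBinary takes no arguments, could look like this:

#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <cstdio>

int main()
{
    pid_t childpid = fork();
    if (childpid == 0)
    {
        // child: replace this process image with runBinary directly,
        // without going through /bin/sh as system() would
        execl("./runBinary", "./runBinary", (char *)NULL);
        perror("execl");   // reached only if execl fails
        _exit(1);
    }
    int status;
    waitpid(childpid, &status, 0);   // parent waits for the child to finish
    return 0;
}

This removes one shell process per child; whether it accounts for the whole 15 seconds is something you would have to measure.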
Without profiling the application: if the parent process that forks has a large memory space, you might find that the time goes into the fork itself, in the attempt to duplicate the memory space.
This isn't a problem in Red Hat Enterprise Linux 6, but was in earlier versions of Red Hat Enterprise Linux 5.
I want to write a program like a system monitor.
I want to have a list of programs with their process ID and usage of CPU and RAM.
I know Linux writes this information in the /proc folder, but somebody told me that I can use some functions to get it too. For example, a program that would return a list like:
name PID RAM
sh 3904 72KIB
And I want to code in C++.
Why don't you look at the source code for top, which displays these and many more process statistics?
Here is the busybox version, which is comparatively short and simple. It gets the information by reading the proc filesystem; that logic is here.
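If you prefer to read /proc yourself rather than study the top sources, a rough C++ sketch could look like the one below (the field names are those documented in proc(5); VmRSS is the resident memory in kB, and kernel threads simply have no VmRSS line):

#include <dirent.h>
#include <cctype>
#include <fstream>
#include <iostream>
#include <string>

int main()
{
    DIR *proc = opendir("/proc");
    if (!proc)
        return 1;
    std::cout << "name\tPID\tRAM\n";
    while (dirent *entry = readdir(proc))
    {
        std::string pid = entry->d_name;
        // numeric directory names under /proc are process IDs
        if (!std::isdigit(static_cast<unsigned char>(pid[0])))
            continue;
        std::ifstream status("/proc/" + pid + "/status");
        std::string line, name, rss;
        while (std::getline(status, line))
        {
            if (line.rfind("Name:", 0) == 0)
                name = line.substr(line.find_first_not_of(" \t", 5));
            else if (line.rfind("VmRSS:", 0) == 0)
                rss = line.substr(line.find_first_not_of(" \t", 6));
        }
        std::cout << name << '\t' << pid << '\t' << rss << '\n';
    }
    closedir(proc);
    return 0;
}

CPU usage is a bit more work: you have to sample the utime/stime fields of /proc/<pid>/stat twice and divide the difference by the elapsed time, which is essentially what top does.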
I need to run a Linux command such as "df" from my Linux daemon to find out the free space, used space, total size of the partition and other info. I have options like calling system, exec, popen, etc.
But as each of these commands spawns a new process, is it not possible to run the commands in the same process from which they are invoked?
And at the same time, since I need to run this command from a Linux daemon, and my daemon should not hold any terminal, will it affect my daemon's behavior?
Or is there any C or C++ standard API for getting information about the mounted partitions?
There is no standard API, as this is an OS-specific concept.
However,
You can parse /proc/mounts (or /etc/mtab) with (non-portable) getmntent/getmntent_r helper functions.
Using the information about mounted filesystems, you can get statistics for each of them with statfs (a sketch follows below).
You may find it useful to explore the i3status program source code: http://code.stapelberg.de/git/i3status/tree/src/print_disk_info.c
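To illustrate that route, here is a minimal sketch; note that it uses the POSIX statvfs() rather than the Linux-specific statfs() mentioned above, and it simply prints totals the way df would:

#include <mntent.h>
#include <sys/statvfs.h>
#include <cstdio>

int main()
{
    // Walk the list of mounted filesystems and query each mount point.
    FILE *mounts = setmntent("/proc/mounts", "r");
    if (!mounts)
        return 1;
    while (mntent *ent = getmntent(mounts))
    {
        struct statvfs vfs;
        if (statvfs(ent->mnt_dir, &vfs) != 0)
            continue;
        unsigned long long total = (unsigned long long)vfs.f_blocks * vfs.f_frsize;
        unsigned long long avail = (unsigned long long)vfs.f_bavail * vfs.f_frsize;
        std::printf("%-30s total %llu bytes, available %llu bytes\n",
                    ent->mnt_dir, total, avail);
    }
    endmntent(mounts);
    return 0;
}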
To answer your other questions:
But as each of these commands spawns a new process, is it not possible to run the commands in the same process from which they are invoked?
No; those 'commands' are self-contained programs that must run in their own processes.
Depending upon how often you wish to execute your programs, fork()/exec() is not so bad. There is no hard limit beyond which it becomes better to gather the data yourself instead of executing a helper program. Once a minute, you're probably fine executing the commands. Once a second, you're probably better off gathering the data yourself. I'm not sure where the dividing line is.
And at the same time, since I need to run this command from a Linux daemon, and my daemon should not hold any terminal, will it affect my daemon's behavior?
If the command calls setsid(2) and then open(2)s a terminal without including O_NOCTTY, that terminal might become the controlling terminal for that process. But that wouldn't affect your program, because your program already disowned its terminal when becoming a daemon, and since the child process is a session leader of its own session, it cannot change your process's controlling terminal.