Understanding memory used by a program in bash (on Ubuntu Linux) - C++

In some programming contests, problems have a memory limit (like 64MB or 256MB). How can I understand the memory used by my program (written in C++) with bash commands? Is there any way to limit the memory used by the program? The program should terminate if it uses more memory than the limit.

The top command will give you a list of all running processes along with their current memory and swap usage; if you prefer a GUI, you can use the System Monitor application.
As for locking down memory usage, you can always use ulimit -v to set the maximum virtual address space for a process. This will cause malloc and friends to fail if they try to get more memory than that limit.

Depending on how much work you want to put into it you can look at getrusage(), getrlimit(), and setrlimit(). For testing purposes you can call them at the beginning of your program or perhaps set them up in a parent process and fork your contest program off as a child. Then dispense with them when you submit your program for contest consideration.
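For instance, here is a minimal sketch of that parent/child setup, assuming Linux with glibc; the 256 MB cap and the ./solution path are placeholders, not anything prescribed above:

#include <cstdio>
#include <sys/resource.h>
#include <sys/wait.h>
#include <unistd.h>

int main() {
    pid_t pid = fork();
    if (pid == 0) {
        // Child: cap its virtual address space at 256 MB, then exec the contest program.
        struct rlimit rl;
        rl.rlim_cur = rl.rlim_max = 256UL * 1024 * 1024;
        if (setrlimit(RLIMIT_AS, &rl) != 0) { std::perror("setrlimit"); _exit(1); }
        execl("./solution", "solution", (char *)nullptr);
        std::perror("execl");   // reached only if exec fails
        _exit(1);
    }
    int status;
    waitpid(pid, &status, 0);   // past the cap, malloc/new in the child fail
    std::printf("child exited with status %d\n", WEXITSTATUS(status));
    return 0;
}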

Also, for process 1234, you could look into /proc/1234/maps or /proc/1234/smaps, or run pmap 1234; all of these display the memory map of the process with pid 1234.
Try to run cat /proc/self/maps to get an example (the memory map of the process running that cat command).
The memory map of a process is initialized by execve(2) and modified by syscalls such as mmap(2).
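As a quick illustration, a C++ program can read its own memory map the same way; this sketch simply streams /proc/self/maps to standard output:

#include <fstream>
#include <iostream>
#include <string>

int main() {
    std::ifstream maps("/proc/self/maps");   // pseudo-file; reopen it for each fresh snapshot
    std::string line;
    while (std::getline(maps, line))
        std::cout << line << '\n';   // one mapping per line: address range, perms, offset, dev, inode, path
    return 0;
}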

Related

Analyze Glibc heap memory

I am researching an embedded device that uses GLIBC 2.25.
When I look at /proc/PID/maps I see some anonymous sections under the heap; I understand those sections are created when the process uses new.
I dumped those sections with dd, and there are values there that I want to understand: is a given buffer allocated or free, and what is its size?
How can I do that, please?
You can use gdb (the GNU Debugger) to inspect the memory of a running process. Attach to the process using its PID and use the x command to examine memory at a specific address. You can also use the info proc mappings command to view the memory maps of the process, including the heap. Note that stock gdb has no dedicated heap-listing command; heap-inspection commands come from extensions such as gdb-heap or pwndbg, and you can invoke glibc helpers like malloc_info() from the debugger with the call command.
You can also call the malloc_stats function to display heap usage information such as the total bytes obtained from the system and the bytes currently in use.
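For example, both functions are declared in glibc's <malloc.h> and can be called from the program under study; a minimal sketch:

#include <cstdio>
#include <cstdlib>
#include <malloc.h>

int main() {
    void *p = std::malloc(1 << 20);   // 1 MB allocation so the statistics have something to show
    malloc_stats();                   // human-readable per-arena summary, printed to stderr
    malloc_info(0, stdout);           // XML dump of heap state (glibc 2.10 and later)
    std::free(p);
    return 0;
}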
You can also use the pmap command to display the memory map of a process, including the heap size. This command is available on some systems and may not be present on others.
It's also worth noting that the /proc/PID/maps file can also give you an idea about the heap section of a process.
Please keep in mind that you need to have the right permission to access the process you want to inspect.
Instead of analyzing the memory from /proc, you may want to try the following options, depending on your environment.
Use tools like valgrind if you suspect any kind of leaks or invalid reads/writes.
Rather than looking at the output of dd, attach to the running process and inspect the memory within it; that gives you the context to make sense of the memory usage.
Use logging to dump the addresses of each allocation, free, read, and write; this lets you build a better understanding of the memory usage (see the sketch after this list).
You may have to combine all of the above options, depending on the complexity of your task.
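As a sketch of the logging option above: in C++ you can replace the global operator new and operator delete and log every allocation and free (the log format here is just one possibility):

#include <cstdio>
#include <cstdlib>
#include <new>

void *operator new(std::size_t size) {
    void *p = std::malloc(size);   // fprintf uses malloc internally, not operator new, so no recursion
    if (!p) throw std::bad_alloc();
    std::fprintf(stderr, "alloc %zu bytes at %p\n", size, p);
    return p;
}

void operator delete(void *p) noexcept {
    std::fprintf(stderr, "free %p\n", p);
    std::free(p);
}

int main() {
    int *x = new int(42);   // logged
    delete x;               // logged
    return 0;
}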

How to get memory information on Linux system?

How to get the total memory, used memory, free memory from C++ code on Linux system?
Run your program through valgrind. For a program called foo, for example:
valgrind foo
It'll run the program in a harness that keeps track of memory use and print out that information after the program terminates.
If you don't have valgrind installed for some reason, you should be able to find it in your distro's package repository.
As commented by Chris Stratton, on Linux you can query a lot of system information in /proc/, so read proc(5). It contains textual pseudo-files (a bit like pipes) to be read sequentially; these are not real disk files, so they are read very quickly, but you'll need to open and close them at every measurement.
From inside a process, you can query its address space in virtual memory using /proc/self/maps and /proc/self/smaps; from outside that process, for another process of pid 1234, use /proc/1234/maps and /proc/1234/smaps. You can get system-wide memory information through /proc/meminfo.
So try the following commands in a terminal:
cat /proc/meminfo
cat /proc/$$/maps
cat /proc/$$/smaps
cat /proc/self/maps
to understand more what they could give you.
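If you want those numbers from C++ code rather than from the shell, one simple approach is to parse /proc/meminfo yourself; a minimal sketch (field names as documented in proc(5)):

#include <fstream>
#include <iostream>
#include <string>

// Return the value in kB of a /proc/meminfo field such as "MemTotal:", or -1 if absent.
long meminfo_kb(const std::string &key) {
    std::ifstream f("/proc/meminfo");
    std::string word;
    long kb;
    while (f >> word)
        if (word == key && f >> kb)
            return kb;
    return -1;
}

int main() {
    long total = meminfo_kb("MemTotal:");
    long avail = meminfo_kb("MemAvailable:");   // a better "free" estimate than MemFree:
    std::cout << "total: " << total << " kB, available: " << avail
              << " kB, used: " << (total - avail) << " kB\n";
    return 0;
}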
Be aware that malloc and free (used by new and delete) usually request space from the kernel with syscalls like mmap(2), but they also manage previously free-d memory for reuse, so they often don't release memory back to the kernel with munmap. Read the Wikipedia page on C dynamic memory management. In other words, freed heap memory is generally not returned to the system (some unused memory, reusable by future malloc-s, remains mmap-ed). See also mallinfo(3) and malloc_stats(3).
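A small experiment illustrates this (a sketch, assuming glibc malloc): after freeing many small blocks, the resident set size usually does not shrink, because the allocator keeps the pages for later reuse:

#include <cstdlib>
#include <cstring>
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

// Read VmRSS (resident set size, in kB) from /proc/self/status.
long vm_rss_kb() {
    std::ifstream f("/proc/self/status");
    std::string word;
    long kb;
    while (f >> word)
        if (word == "VmRSS:" && f >> kb)
            return kb;
    return -1;
}

int main() {
    std::cout << "RSS at start: " << vm_rss_kb() << " kB\n";
    std::vector<char *> blocks;
    for (int i = 0; i < 100000; ++i) {   // ~100 MB in small chunks
        char *p = (char *)std::malloc(1024);
        std::memset(p, 0xAB, 1024);      // touch the pages so they count in RSS
        blocks.push_back(p);
    }
    std::cout << "RSS after malloc: " << vm_rss_kb() << " kB\n";
    for (char *p : blocks)
        std::free(p);
    std::cout << "RSS after free: " << vm_rss_kb() << " kB\n";   // usually stays high
    return 0;
}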
As Justin Lardinois answered, use valgrind to detect memory leaks.
Advanced Linux Programming is a good book to read. It has several related chapters.

C++ Multithread program on linux memory issue

I'm developing a software that requires creation and deletion of a large number of threads.
When I create threads the memory usage increases, and when I delete them (confirmed with ps -mo THREAD -p <pid>) the memory attributed to the program does not decrease (according to top). As a result I run out of memory.
I have used Valgrind to check for memory errors/leaks and it finds none. This is on a Debian box. Please let me know what the issue could be.
How are you deleting the threads?
The notes here http://www.kernel.org/doc/man-pages/online/pages/man3/pthread_join.3.html talk about needing to call join in some cases to free up resources.
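As a sketch of the usual fix: a joinable pthread keeps its stack (8 MB by default) and bookkeeping until pthread_join() is called, so threads that exit but are never joined keep accumulating memory. Either join them or detach them:

#include <cstdio>
#include <pthread.h>

void *worker(void *) {
    return nullptr;   // do some work, then exit
}

int main() {
    for (int i = 0; i < 1000; ++i) {
        pthread_t t;
        pthread_create(&t, nullptr, worker, nullptr);
        pthread_join(t, nullptr);   // releases the thread's stack and bookkeeping
        // Alternative: pthread_detach(t); resources are then freed as soon as the thread exits.
    }
    std::puts("done");
    return 0;
}

Compile with -pthread. Without the pthread_join (or pthread_detach) call, each iteration would leave a zombie thread behind and memory use would grow, which matches the symptom described.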
You do not run out of memory.
The "free memory" you see in top command is actually not the memory that is available when required. Linux kernel uses as much as possible/useable of the free memory for its page cache. When a process requires memory, the kernel can throw away that page cache and provide that memory to a process.
In other words: linux uses the free memory, instead of just leaving it idling around...
Use free -m: the row labeled "-/+ buffers/cache:" shows the real amount of memory available to processes.

How do I calculate the total space used in a program?

Let's say I have created a program in C/C++ and have its source code.
I'd like to know the total memory used during the program's execution.
Someone has mentioned something about malloc and hooks.
Is there any other way to trace the space used?
If you are running Linux or something Unix-based, you could most likely use Valgrind. Valgrind runs the program and intercepts all of its memory allocations and prints the stats once it exits. It's a very useful tool for checking for memory leaks and memory usage. If you're running Windows, I haven't a clue.
You can monitor memory use with the "top" command in linux or taskmgr in windows.
On Linux-like systems, you can use the info under
/proc/self
to find out the total amount of memory used by your program at runtime. It also contains a lot of other information about the process; see
man 5 proc
for details.
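As a complement to /proc/self, getrusage(2) gives you a peak figure from inside the program itself; on Linux, ru_maxrss is reported in kilobytes. A minimal sketch:

#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <sys/resource.h>

int main() {
    char *big = (char *)std::malloc(50 * 1024 * 1024);
    std::memset(big, 1, 50 * 1024 * 1024);   // touch the pages so they count toward the peak
    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);
    std::printf("peak RSS: %ld kB\n", ru.ru_maxrss);
    std::free(big);
    return 0;
}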

Limit physical memory per process

I am writing an algorithm to perform some external memory computations, i.e. where your input data does not fit into main memory and you have to consider the I/O complexity.
Since for my tests I do not always want to use real inputs, I want to limit the amount of memory available to my process. What I have found is that I can set the mem kernel parameter to limit the physical memory used by all processes together (is that correct?).
Is there a way to do the same with a per-process limit? I have seen ulimit, but it only limits the virtual memory per process. Any ideas (maybe I can even set it programmatically from within my C++ code)?
You can try with 'cgroups'.
To use them type the following commands, as root.
# mkdir /dev/cgroups
# mount -t cgroup -omemory memory /dev/cgroups
# mkdir /dev/cgroups/test
# echo 10000000 > /dev/cgroups/test/memory.limit_in_bytes
# echo 12000000 > /dev/cgroups/test/memory.memsw.limit_in_bytes
# echo <PID> > /dev/cgroups/test/tasks
Where <PID> is the PID of the process you want to add to the cgroup. Note that the limit applies to the sum of all the processes assigned to this cgroup.
From this moment on, the processes are limited to 10 MB of physical memory and 12 MB of physical+swap.
There are other tunable parameters in that directory, but the exact list will depend on the kernel version you are using.
You can even make hierarchies of limits, just creating subdirectories.
The cgroup is inherited when you fork/exec, so if you add the shell from where your program is launched to a cgroup it will be assigned automatically.
Note that you can mount the cgroups in any directory you want, not just /dev/cgroups.
I can't provide a direct answer, but for this kind of task I usually write my own memory management system so that I have full control of the memory area and how much I allocate. This is usually applicable when you're writing for microcontrollers as well. Hope it helps.
I would use setrlimit with the RLIMIT_AS parameter to set the limit on virtual memory (this is what ulimit -v does), and then have the process call mlockall(MCL_CURRENT|MCL_FUTURE) to force the kernel to fault in and lock all the process's pages into physical RAM, so that the virtual size equals the physical size for this process.
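A sketch of that combination (the 64 MB cap is an arbitrary example; mlockall typically requires CAP_IPC_LOCK or a raised RLIMIT_MEMLOCK to lock that much memory):

#include <cstdio>
#include <sys/mman.h>
#include <sys/resource.h>

int main() {
    struct rlimit rl;
    rl.rlim_cur = rl.rlim_max = 64UL * 1024 * 1024;   // 64 MB of address space
    if (setrlimit(RLIMIT_AS, &rl) != 0)
        std::perror("setrlimit");
    // Lock current and future pages into RAM so virtual == physical for this process.
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
        std::perror("mlockall");
    // ... run the external-memory algorithm here ...
    return 0;
}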
Have you considered trying your code in some kind of virtual environment? A full virtual machine might be too much for your needs, but something like User-Mode Linux could be a good fit. It runs a Linux kernel as a single process inside your regular operating system, so you can give it a separate mem= kernel setting, as well as a separate swap space, for controlled experiments.
The kernel mem= boot parameter limits how much memory the OS will use in total.
This is almost never what the user wants.
For physical memory there is the RSS rlimit (RLIMIT_RSS), but modern Linux kernels do not enforce it; RLIMIT_AS, which is what ulimit -v sets, limits virtual address space instead.
As other posters have indicated already, setrlimit is the most likely solution; it controls the limits on all configurable aspects of a process environment. Use this command to see those individual settings for your shell process:
ulimit -a
The ones most pertinent to your scenario in the resulting output are as follows:
data seg size (kbytes, -d) unlimited
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
virtual memory (kbytes, -v) unlimited
Check out the manual page for setrlimit (man setrlimit); it can be invoked programmatically from your C/C++ code. I have used it to good effect in the past for controlling stack size limits. (By the way, there is no dedicated man page for ulimit; it's actually a bash builtin, so it's documented in the bash man page.)
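As an illustration of that programmatic use, here is a sketch that reads and raises the stack limit from C++ (the 16 MB value is an example only, and the soft limit can only be raised up to the hard limit):

#include <cstdio>
#include <sys/resource.h>

int main() {
    struct rlimit rl;
    getrlimit(RLIMIT_STACK, &rl);
    if (rl.rlim_cur == RLIM_INFINITY)
        std::puts("stack soft limit: unlimited");
    else
        std::printf("stack soft limit: %ld kB\n", (long)(rl.rlim_cur / 1024));
    rl.rlim_cur = 16UL * 1024 * 1024;   // request a 16 MB soft limit
    if (setrlimit(RLIMIT_STACK, &rl) != 0)
        std::perror("setrlimit");
    return 0;
}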