I am using openSUSE 11.0. The system hung and I had to do a hard reboot. After investigating the logs I found the following errors:
Mem-info:
kernel: Node 0 DMA per-cpu:
kernel: CPU 0: hi: 0, btch: 1 usd: 0
kernel: Node 0 DMA32 per-cpu:
kernel: CPU 0: hi: 186, btch: 31 usd: 174
kernel: Active:229577 inactive:546 dirty:0 writeback:0 unstable:0
kernel: free:1982 slab:5674 mapped:18 pagetables:10359 bounce:0
kernel: Node 0 DMA free:4000kB min:32kB low:40kB high:48kB active:2800kB inactive:2184kB present:8860kB pages_scanned:9859 all_unreclaimable? yes
kernel: lowmem_reserve[]: 0 994 994 994
kernel: Node 0 DMA32 free:3928kB min:4016kB low:5020kB high:6024kB active:915508kB inactive:0kB present:1018016kB pages_scanned:2233186 all_unreclaimable? yes
lowmem_reserve[]: 0 0 0 0
Node 0 DMA: 0*4kB 8*8kB 6*16kB 2*32kB 3*64kB 6*128kB 1*256kB 1*512kB 0*1024kB 1*2048kB 0*4096kB = 4000kB
Node 0 DMA32: 4*4kB 9*8kB 0*16kB 4*32kB 2*64kB 0*128kB 0*256kB 1*512kB 1*1024kB 1*2048kB 0*4096kB = 3928kB
19418 total pagecache pages
Swap cache: add 36342342, delete 36342340, find 14356263/18138459
Free swap = 0kB
Total swap = 771080kB
Free swap: 0kB
262144 pages of RAM
kernel: 5430 reserved pages
It seems to be related to a memory leak, but I am not sure.
If anybody has a solution to a similar issue, please let me know.
Thanks in advance
Rajiv
I suggest doing something like running "while true ; do sleep 10 ; ps auxw >> ~/processes ; done". Then, after your system comes back up, you can probably spot the memory hog that chewed through 700 MB of swap by reading through the file and finding the program whose memory keeps growing.
When you find the program that is eating all your memory, you can use rlimits (see man bash and search for 'ulimit') to limit how much memory that program may use before you start it, and maybe keep your system a little more sane.
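If you would rather enforce such a cap from inside the program rather than from the shell, the same effect can be had with the setrlimit() system call; here is a minimal sketch (the 512 MB figure is just an illustrative value):

#include <sys/resource.h>
#include <cstdio>

int main() {
    // Cap this process's virtual address space at 512 MB (illustrative value);
    // roughly equivalent to running "ulimit -v 524288" in the shell first.
    struct rlimit lim;
    lim.rlim_cur = 512UL * 1024 * 1024;  // soft limit, in bytes
    lim.rlim_max = 512UL * 1024 * 1024;  // hard limit, in bytes
    if (setrlimit(RLIMIT_AS, &lim) != 0) {
        std::perror("setrlimit");
        return 1;
    }
    // ... run the memory-hungry work here; allocations beyond the limit fail
    // (malloc returns NULL, operator new throws std::bad_alloc).
    return 0;
}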
We have built ejabberd on AWS EC2 instances and have enabled clustering across the six ejabberd servers in the Tokyo, Frankfurt, and Singapore regions.
The OS, middleware, applications, and settings for each EC2 instance are exactly the same.
But currently, the ejabberd CPUs in the Frankfurt and Singapore regions are overloaded.
The CPU of ejabberd in the Tokyo (Japan) region is normal.
Could you please point out where I should look for the cause?
You can take a look at the ejabberd log files of the problematic (and the good) nodes; maybe you will find some clue there.
You can also use the undocumented "ejabberdctl etop" shell command on the problematic nodes. It's similar to "top", but runs inside the Erlang virtual machine that runs ejabberd:
ejabberdctl etop
========================================================================================
ejabberd#localhost 16:00:12
Load: cpu 0 Memory: total 44174 binary 1320
procs 277 processes 5667 code 20489
runq 1 atom 984 ets 5467
Pid Name or Initial Func Time Reds Memory MsgQ Current Function
----------------------------------------------------------------------------------------
<9135.1252.0> caps_requests_cache 2393 1 2816 0 gen_server:loop/7
<9135.932.0> mnesia_recover 480 39 2816 0 gen_server:loop/7
<9135.1118.0> dets:init/2 71 2 5944 0 dets:open_file_loop2
<9135.6.0> prim_file:start/0 63 1 2608 0 prim_file:helper_loo
<9135.1164.0> dets:init/2 56 2 4072 0 dets:open_file_loop2
<9135.818.0> disk_log:init/2 49 2 5984 0 disk_log:loop/1
<9135.1038.0> ejabberd_listener:in 31 2 2840 0 prim_inet:accept0/3
<9135.1213.0> dets:init/2 31 2 5944 0 dets:open_file_loop2
<9135.1255.0> dets:init/2 30 2 5944 0 dets:open_file_loop2
<9135.0.0> init 28 1 3912 0 init:loop/1
========================================================================================
I'm using GCP's Cloud Notebook VMs. I have a VM with 200+ GB of RAM running and am attempting to download about 70 GB of data from BigQuery into memory using the BigQuery storage engine.
Once it gets to around 50 GB, the kernel crashes.
Tailing the logs with sudo tail -20 /var/log/syslog, here's what I find:
Dec 2 13:35:57 pytorch-20200908-152245 kernel: [60783.550367] Task in /system.slice/jupyter.service killed as a result of limit of /system.slice/jupyter.service
Dec 2 13:35:57 pytorch-20200908-152245 kernel: [60783.563843] memory: usage 53350876kB, limit 53350964kB, failcnt 1708893
Dec 2 13:35:57 pytorch-20200908-152245 kernel: [60783.570582] memory+swap: usage 0kB, limit 9007199254740988kB, failcnt 0
Dec 2 13:35:57 pytorch-20200908-152245 kernel: [60783.578694] kmem: usage 110900kB, limit 9007199254740988kB, failcnt 0
Dec 2 13:35:57 pytorch-20200908-152245 kernel: [60783.585267] Memory cgroup stats for /system.slice/jupyter.service: cache:752KB rss:53239292KB rss_huge:0KB mapped_file:60KB dirty:0KB writeback:0KB inactive_anon:0KB active_anon:53239292KB inactive_file:400KB active_file:248KB unevictable:0KB
Dec 2 13:35:57 pytorch-20200908-152245 kernel: [60783.612963] [ pid ] uid tgid total_vm rss nr_ptes nr_pmds swapents oom_score_adj name
Dec 2 13:35:57 pytorch-20200908-152245 kernel: [60783.621645] [ 787] 1003 787 99396 17005 63 3 0 0 jupyter-lab
Dec 2 13:35:57 pytorch-20200908-152245 kernel: [60783.632295] [ 2290] 1003 2290 4996 966 14 3 0 0 bash
Dec 2 13:35:57 pytorch-20200908-152245 kernel: [60783.642309] [13143] 1003 13143 1272679 26639 156 6 0 0 python
Dec 2 13:35:58 pytorch-20200908-152245 kernel: [60783.652528] [ 5833] 1003 5833 16000467 13268794 26214 61 0 0 python
Dec 2 13:35:58 pytorch-20200908-152245 kernel: [60783.661384] [ 6813] 1003 6813 4996 936 14 3 0 0 bash
Dec 2 13:35:58 pytorch-20200908-152245 kernel: [60783.670033] Memory cgroup out of memory: Kill process 5833 (python) score 996 or sacrifice child
Dec 2 13:35:58 pytorch-20200908-152245 kernel: [60783.680823] Killed process 5833 (python) total-vm:64001868kB, anon-rss:53072876kB, file-rss:4632kB, shmem-rss:0kB
Dec 2 13:38:07 pytorch-20200908-152245 sync_gcs_service.sh[806]: GCS bucket is not specified in GCE metadata, skip GCS sync
Dec 2 13:39:03 pytorch-20200908-152245 bash[787]: [I 13:39:03.463 LabApp] Saving file at /outlog.txt
I followed the guidance in "How to increase Jupyter notebook Memory limit?" and allocated 100 GB of RAM, but it's still crashing at around 55 GB; 53350964 kB is the limit shown in the logs.
How can I utilize the available memory of my machine? Thanks!
Tacking on what worked - raising this cgroup setting to a higher number:
/sys/fs/cgroup/memory/system.slice/jupyter.service/memory.limit_in_bytes
I can see here that "Memory cgroup out of memory" means the instance itself has sufficient memory and the process was killed because it hit a cgroup limit. This can happen for virtualized or containerized workloads, since Docker containers impose their own cgroup memory limits.
a) Identify the cgroup:
systemd-cgtop
b) Check the limit of the cgroup
cat /sys/fs/cgroup/memory/[CGROUP_NAME]/memory.limit_in_bytes
c) Adjust the limit: for a Kubernetes pod, edit the pod's configuration; for a Docker container, set the container's memory limit; for a raw cgroup, update the value directly:
echo [NUMBER_OF_BYTES] > /sys/fs/cgroup/memory/[CGROUP_NAME]/memory.limit_in_bytes
I am writing to a 930GB file (preallocated) on a Linux machine with 976 GB memory.
The application is written in C++ and I am memory mapping the file using Boost Interprocess. Before starting the code I set the stack size:
ulimit -s unlimited
The writing was very fast a week ago, but today it is running slowly. I don't think the code has changed, but I may have accidentally changed something in my environment (it is an AWS instance).
The application ("write_data") doesn't seem to be using all the available memory. "top" shows:
Tasks: 559 total, 1 running, 558 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.0%us, 0.0%sy, 0.0%ni, 98.5%id, 1.5%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 1007321952k total, 149232000k used, 858089952k free, 286496k buffers
Swap: 0k total, 0k used, 0k free, 142275392k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
4904 root 20 0 2708m 37m 27m S 1.0 0.0 1:47.00 dockerd
56931 my_user 20 0 930g 29g 29g D 1.0 3.1 12:38.95 write_data
57179 root 20 0 0 0 0 D 1.0 0.0 0:25.55 kworker/u257:1
57512 my_user 20 0 15752 2664 1944 R 1.0 0.0 0:00.06 top
I thought the resident size (RES) should include the memory mapped data, so shouldn't it be > 930 GB (size of the file)?
Can someone suggest ways to diagnose the problem?
Memory mappings generally aren't eagerly populated. If some other program had already forced the file into the page cache, you'd see good performance from the start; otherwise you'd see poor performance as the file was paged in.
Given that you have enough RAM to hold the whole file in memory, you may want to hint to the OS that it should prefetch the file, replacing the many small reads triggered by page faults with larger bulk reads. The posix_madvise API can be used to provide this hint by passing POSIX_MADV_WILLNEED as the advice, indicating that the kernel should read ahead over the whole mapping.
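As a rough sketch of that hint (assuming the mapping was created with Boost.Interprocess, so the address and length would come from mapped_region::get_address() and get_size()):

#include <sys/mman.h>   // posix_madvise
#include <cstddef>
#include <cstdio>

// Ask the kernel to prefetch the whole mapping. 'addr' and 'len' are assumed
// to come from the existing mapping, e.g. mapped_region::get_address()/get_size().
void prefetch_mapping(void* addr, std::size_t len) {
    int rc = posix_madvise(addr, len, POSIX_MADV_WILLNEED);
    if (rc != 0) {
        // posix_madvise returns the error number directly rather than setting errno.
        std::fprintf(stderr, "posix_madvise failed: %d\n", rc);
    }
}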
I want to calculate my process memory (RSS) at runtime in my application (C++/Unix/multithreaded). Is there an API I can use for that? Please note that I am aware of reading /proc/stat and of getrusage(), but I don't want to read/parse a system file from the application, and getrusage() does not work in my Linux distribution.
The whole intent is to check for memory leaks caused by my application. I have even tried tracking memory by overloading new/malloc/calloc/realloc to record every allocation, but even with these I am not able to account for all the memory allocated by the process. It would also be helpful if you could suggest other probable areas to look at for memory allocations/leaks besides the APIs stated above.
I am aware of Valgrind/mpatrol-style memory monitoring tools, but unfortunately they do not work with my application.
Thanks in advance
First, this kind of information is operating system specific. It has to be done differently on Linux, on MacOSX, on FreeBSD...
On Linux, the blessed way is, as everyone has told you, to use the /proc file system, which is how the system utilities (e.g. top or ps) retrieve that information (perhaps via libproc, which is just a wrapper around reads of /proc files).
Could you explain why reading e.g. /proc/self/statm or /proc/self/stat or /proc/self/status or /proc/self/maps is not possible for you?
Remember that these /proc files are pseudo-files, so no slow disk I/O is involved in reading them. You do have to read them sequentially, though; seeking in them (or stat-ing them) does not work.
It seems to me that
#include <stdio.h>

// Returns the process's total program size in pages (the first field of
// /proc/self/statm), or -1 on error.
long process_size_in_pages(void)
{
    long s = -1;
    FILE *f = fopen("/proc/self/statm", "r");
    if (!f) return -1;
    // If the fscanf fails for any reason, report failure.
    if (fscanf(f, "%ld", &s) != 1) s = -1;
    fclose(f);
    return s;
}
is the fastest way to retrieve that information. Why can't you do that?
You could use Valgrind: run it with its gdbserver (monitor) mode enabled and, from the attached gdb, issue its monitor command for a full leak check; it will report the memory currently allocated at run time. See this page for more information.
You can read /proc/${pid}/status; it looks like this:
Name: nginx
State: S (sleeping)
SleepAVG: 98%
Tgid: 11884
Pid: 11884
PPid: 11883
TracerPid: 0
Uid: 99 99 99 99
Gid: 99 99 99 99
FDSize: 64
Groups: 99
VmPeak: 23932 kB
VmSize: 23932 kB
VmLck: 0 kB
VmHWM: 4276 kB
VmRSS: 4276 kB
VmData: 3744 kB
VmStk: 88 kB
VmExe: 452 kB
VmLib: 3024 kB
VmPTE: 88 kB
StaBrk: 1a931000 kB
Brk: 1a974000 kB
StaStk: 7fffc224d560 kB
Threads: 1
SigQ: 0/73712
SigPnd: 0000000000000000
ShdPnd: 0000000000000000
SigBlk: 0000000000000000
SigIgn: 0000000040001000
SigCgt: 0000000198016a07
CapInh: 0000000000000000
CapPrm: 0000000000000000
CapEff: 0000000000000000
Cpus_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,0000000f
Mems_allowed: 00000000,00000001
You can parse the VmRSS value.
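A minimal sketch of parsing it for the current process (the field is reported in kB):

#include <fstream>
#include <string>

// Returns the resident set size of the calling process in kB,
// or -1 if VmRSS could not be read from /proc/self/status.
long current_rss_kb() {
    std::ifstream status("/proc/self/status");
    std::string line;
    while (std::getline(status, line)) {
        if (line.compare(0, 6, "VmRSS:") == 0) {
            // The line looks like "VmRSS:      4276 kB".
            return std::stol(line.substr(6));
        }
    }
    return -1;
}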
I have a small application that I am running right now, and I wanted to check whether it has any memory leaks, so I put in this piece of code:
for (unsigned int i = 0; i < 10000; i++) {
    for (unsigned int j = 0; j < 10000; j++) {
        std::ifstream &a = s->fhandle->open("test");
        char temp[30];
        a.getline(temp, 30);
        s->fhandle->close("test");
    }
}
When I ran the application I cat'ed /proc/<pid>/status to see whether the memory increases.
The output is the following after about 2 minutes of runtime:
Name: origin-test
State: R (running)
Tgid: 7267
Pid: 7267
PPid: 6619
TracerPid: 0
Uid: 1000 1000 1000 1000
Gid: 1000 1000 1000 1000
FDSize: 256
Groups: 4 20 24 46 110 111 119 122 1000
VmPeak: 183848 kB
VmSize: 118308 kB
VmLck: 0 kB
VmHWM: 5116 kB
VmRSS: 5116 kB
VmData: 9560 kB
VmStk: 136 kB
VmExe: 28 kB
VmLib: 11496 kB
VmPTE: 240 kB
VmSwap: 0 kB
Threads: 2
SigQ: 0/16382
SigPnd: 0000000000000000
ShdPnd: 0000000000000000
SigBlk: 0000000000000000
SigIgn: 0000000000002004
SigCgt: 00000001800044c2
CapInh: 0000000000000000
CapPrm: 0000000000000000
CapEff: 0000000000000000
CapBnd: ffffffffffffffff
Cpus_allowed: 3f
Cpus_allowed_list: 0-5
Mems_allowed: 00000000,00000001
Mems_allowed_list: 0
voluntary_ctxt_switches: 120
nonvoluntary_ctxt_switches: 26475
None of the values changed except the last one, so does that mean there are no memory leaks?
But what's more important, and what I would like to know, is whether it is bad that the last value is increasing rapidly (about 26475 switches in about 2 minutes!).
I looked at some other applications to compare how many non-voluntary switches they have:
Firefox: about 200
Gdm: 2
Netbeans: 19
Then I googled and found some explanations, but they were too technical for me to understand.
What I got from it is that this happens when the application is switched off the processor, or something like that? (I have an AMD 6-core processor, by the way.)
How can I prevent my application from doing that, and to what extent could this be a problem when running the application?
Thanks in advance,
Robin.
A voluntary context switch occurs when your application blocks in a system call and the kernel decides to give its time slice to another process.
A non-voluntary context switch occurs when your application has used up the whole time slice the scheduler allotted to it (the kernel tries to pretend that each application has the whole computer to itself and can use as much CPU as it wants, but it has to switch from one process to another so that the user has the illusion that they are all running in parallel).
In your case, since you're opening, closing, and reading from the same file, it probably stays in the virtual file system cache during the whole execution of the process, and your program is preempted by the kernel because it never blocks (thanks to the system and library caches). Firefox, Gdm, and Netbeans, on the other hand, are mostly waiting for input from the user or from the network, so they rarely need to be preempted by the kernel.
Those context switches are not harmful. On the contrary, they allow the processor to be shared fairly by all applications, even when one of them is waiting for some resource.
And BTW, to detect memory leaks, a better solution would be to use a tool dedicated to this, such as valgrind.
To add to @Sylvain's info, there is a nice background article on Linux scheduling here: "Inside the Linux scheduler" (developerWorks, June 2006).
To look for a memory leak, it is much better to install and use Valgrind (http://www.valgrind.org/). It will identify memory leaks in the heap as well as memory error conditions (use of uninitialized memory and tons of other problems). I use it almost every day.
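As an illustration, a trivially leaky program like the sketch below is enough to see a report; running it under valgrind --leak-check=full points at the allocation that was never freed:

#include <cstring>

int main() {
    // 100 bytes allocated and never freed: Valgrind's memcheck reports this
    // as "definitely lost", together with the allocation stack trace.
    char* buf = new char[100];
    std::strcpy(buf, "leaked");
    return 0;  // missing delete[] buf
}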