I have a SWIG-generated C++ source file of 24 MB, nearly 500,000 lines of code. I can compile it when I set the compiler optimization level to -xO0, but it fails as soon as I add any other C++ compiler flags (like -xprofile ...). I am using the Solaris Studio 12.3 C++ compiler.
Below is the console error:
Element size (in bytes): 48
Table size (in elements): 2560000
Table maximum size: 134217727
Table size increment: 5000
Bytes written to disk: 0
Expansions required: 9
Segments used: 1
Max Segments used: 1
Max Segment offset: 134217727
Segment offset size:: 27
Resizes made: 0
Copies due to expansions: 4
Reset requests: 0
Allocation requests: 2827527
Deallocation requests: 267537
Allocated element count: 4086
Free element count: 2555914
Unused element count: 0
Free list size (elements): 0
ir2hf: error: Out of memory
Thanks in Advance.
I found an article suggesting that this has to do with the fact that Solaris limits the amount of memory available for the data segment.
Following the steps in that blog post, try removing the limit. First give your user the sys_resource privilege (substitute your own username for karel):
$ usermod -K defaultpriv=basic,sys_resource karel
Now log off and log back on, then change the limit:
$ ulimit -d unlimited
Then check that the limit has changed:
$ ulimit -d
The output should be unlimited
I'm running three Java 8 JVMs on a 64 bit Ubuntu VM which was built from a minimal install with nothing extra running other than the three JVMs. The VM itself has 2GB of memory and each JVM was limited by -Xmx512M which I assumed would be fine as there would be a couple of hundred MB spare.
A few weeks ago, one crashed and the hs_err_pid dump showed:
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 196608 bytes for committing reserved memory.
# Possible reasons:
# The system is out of physical RAM or swap space
# In 32 bit mode, the process size limit was hit
# Possible solutions:
# Reduce memory load on the system
# Increase physical memory or swap space
# Check if swap backing store is full
# Use 64 bit Java on a 64 bit OS
# Decrease Java heap size (-Xmx/-Xms)
# Decrease number of Java threads
# Decrease Java thread stack sizes (-Xss)
# Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
I restarted the JVM with a reduced heap size of 384MB and so far everything is fine. However, when I now look at the VM using the ps command, sorting by descending RSS size, I see:
RSS %MEM VSZ PID CMD
708768 35.4 2536124 29568 java -Xms64m -Xmx512m ...
542776 27.1 2340996 12934 java -Xms64m -Xmx384m ...
387336 19.3 2542336 6788 java -Xms64m -Xmx512m ...
12128 0.6 288120 1239 /usr/lib/snapd/snapd
4564 0.2 21476 27132 -bash
3524 0.1 5724 1235 /sbin/iscsid
3184 0.1 37928 1 /sbin/init
3032 0.1 27772 28829 ps ax -o rss,pmem,vsz,pid,cmd --sort -rss
3020 0.1 652988 1308 /usr/bin/lxcfs /var/lib/lxcfs/
2936 0.1 274596 1237 /usr/lib/accountsservice/accounts-daemon
..
..
and the free command shows
total used free shared buff/cache available
Mem: 1952 1657 80 20 213 41
Swap: 0 0 0
Taking the first process as an example, there is an RSS size of 708768 KB even though the heap limit would be 524288 KB (512*1024).
I am aware that extra memory is used over the JVM heap, but the question is how can I control this to ensure I do not run out of memory again? I am trying to set the heap size for each JVM as large as I can without crashing them.
Or is there a good general guideline as to how to set the JVM heap size in relation to overall memory availability?
There does not appear to be a way of controlling how much extra memory the JVM will use over the heap. However by monitoring the application over a period of time, a good estimate of this amount can be obtained. If the overall consumption of the java process is higher than desired, then the heap size can be reduced. Further monitoring is needed to see if this impacts performance.
Continuing with the example above and using the command ps ax -o rss,pmem,vsz,pid,cmd --sort -rss we see usage as of today is
RSS %MEM VSZ PID CMD
704144 35.2 2536124 29568 java -Xms64m -Xmx512m ...
429504 21.4 2340996 12934 java -Xms64m -Xmx384m ...
367732 18.3 2542336 6788 java -Xms64m -Xmx512m ...
13872 0.6 288120 1239 /usr/lib/snapd/snapd
..
..
These java processes are all running the same application but with different data sets. The first process (29568) has stayed stable using about 190M beyond the heap limit while the second (12934) has reduced from 156M to 35M. The total memory usage of the third has stayed well under the heap size which suggests the heap limit could be reduced.
It would seem that allowing 200MB extra non heap memory per java process here would be more than enough as that gives 600MB leeway total. Subtracting this from 2GB leaves 1400MB so the three -Xmx parameter values combined should be less than this amount.
As can be gleaned from the article pointed out in a comment by Fairoz, there are many different ways in which the JVM can use non-heap memory. One that is measurable, though, is the thread stack size. The default for a JVM can be found on Linux using java -XX:+PrintFlagsFinal -version | grep ThreadStackSize. In the case above it is 1 MB, and as there are about 25 threads, we can safely say that at least 25 MB extra will always be required.
I am investigating a native memory leak through WinDbg.
With consecutive dumps, !heap -s returns more heaps every dump. Number of heaps returned in the first dump: 1170, second dump: 1208.
There are three sizes of heaps that come back a lot:
0:000> !heap -stat -h 2ba60000
heap # 2ba60000
group-by: TOTSIZE max-display: 20
size #blocks total ( %) (percent of total busy bytes)
1ffa 1 - 1ffa (35.44)
1000 1 - 1000 (17.73)
a52 1 - a52 (11.44)
82a 1 - 82a (9.05)
714 1 - 714 (7.84)
64c 1 - 64c (6.98)
Most blocks refer to the same callstack:
777a5887 ntdll!RtlAllocateHeap
73f9f1de sqlsrv32!SQLAllocateMemory
73fc0370 sqlsrv32!SQLAllocConnect
73fc025d sqlsrv32!SQLAllocHandle
74c6a146 odbc32!GetInfoForConnection
74c6969d odbc32!SQLInternalDriverConnectW
74c6bc24 odbc32!SQLDriverConnectW
74c63141 odbc32!SQLDriverConnect
When will a new heap be created, and how would you dive deeper into this to find the root cause?
If you are able to do live debugging, you can try setting a breakpoint:
bp ntdll!RtlCreateHeap "kc;gc"
This will display the call stack and then continue. Maybe you will spot the culprit.
Do the same with ntdll!RtlDebugCreateHeap.
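For background: every call to HeapCreate ends up in ntdll!RtlCreateHeap and adds one more heap to the !heap -s list, e.g. a driver creating a private heap per connection. A hypothetical sketch of the kind of code the breakpoint above would catch:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Each HeapCreate call lands in ntdll!RtlCreateHeap and would
       hit the breakpoint set above, printing this call stack. */
    HANDLE h = HeapCreate(0, 0, 0);   /* growable private heap */
    if (h == NULL) {
        printf("HeapCreate failed: %lu\n", GetLastError());
        return 1;
    }

    void *p = HeapAlloc(h, HEAP_ZERO_MEMORY, 0x1000);
    printf("allocated %p from private heap %p\n", p, (void *)h);

    /* If the owner never calls HeapDestroy, !heap -s keeps reporting
       one more heap per leaked HeapCreate. */
    HeapDestroy(h);
    return 0;
}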
I have this snippet of code:
if ((shmid = shmget(key, 512, IPC_CREAT | 0666)) < 0)
{
    perror("shmget");
    exit(1);
}
Whenever I set the size any higher than 2048, I get an error that says:
shmget: Invalid argument
However, when I run cat /proc/sys/kernel/shmall, I get 4294967296.
Does anybody know why this is happening? Thanks in advance!
The comment from Jerry is correct, even if it is cryptic when you haven't played with this stuff: "What about this: EINVAL: ... a segment with given key existed, but size is greater than the size of that segment."
He meant that the segment already exists - these segments are persistent - and it has size 2048.
You can see it among the other ones with:
$ ipcs -m
and you can remove your segment (beware: remove only your own) with:
$ ipcrm -M <key>
After that you should be able to recreate it with a larger size.
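If you would rather clean up from code than from the shell, the same removal can be done with shmctl(IPC_RMID). A minimal sketch, assuming the key value 1234 stands in for your real key:

#include <stdio.h>
#include <stdlib.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
    key_t key = 1234;              /* placeholder: use your own key */
    int shmid;

    /* Attach to the existing (too small) segment, if there is one. */
    shmid = shmget(key, 0, 0666);
    if (shmid >= 0) {
        /* Mark it for removal; it disappears once all users detach. */
        if (shmctl(shmid, IPC_RMID, NULL) < 0) {
            perror("shmctl(IPC_RMID)");
            exit(1);
        }
    }

    /* Now the segment can be recreated with the larger size. */
    shmid = shmget(key, 4096, IPC_CREAT | 0666);
    if (shmid < 0) {
        perror("shmget");
        exit(1);
    }
    printf("created segment %d\n", shmid);
    return 0;
}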
man 5 proc refers to three variables related to shmget(2):
/proc/sys/kernel/shmall
This file contains the system-wide limit on the total number of pages of System V shared memory.
/proc/sys/kernel/shmmax
This file can be used to query and set the run-time limit on the maximum (System V IPC) shared memory segment size that can be created. Shared memory segments up to 1GB are now supported in the kernel. This value defaults to SHMMAX.
/proc/sys/kernel/shmmni
(available in Linux 2.4 and onward) This file specifies the system-wide maximum number of System V shared memory segments that can be created.
Please check that you violated none of them. Note that shmmax and SHMMAX are in bytes, while shmall and SHMALL are in pages (the page size is usually 4 KB, but you should use sysconf(_SC_PAGESIZE)). I personally feel your shmall is too large (2**32 pages == 16 TB), but I am not sure whether that is harmful.
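To see how the two units relate on your system, here is a small sketch (just an illustration; it assumes a Linux/glibc environment and the proc paths documented above):

#include <stdio.h>
#include <unistd.h>

/* Read a single unsigned value from a /proc file, e.g. shmmax or shmall. */
static unsigned long long read_limit(const char *path)
{
    unsigned long long v = 0;
    FILE *f = fopen(path, "r");
    if (f) {
        if (fscanf(f, "%llu", &v) != 1)
            v = 0;
        fclose(f);
    }
    return v;
}

int main(void)
{
    long page = sysconf(_SC_PAGESIZE);
    unsigned long long shmmax = read_limit("/proc/sys/kernel/shmmax");
    unsigned long long shmall = read_limit("/proc/sys/kernel/shmall");

    printf("page size      : %ld bytes\n", page);
    printf("shmmax (bytes) : %llu\n", shmmax);
    printf("shmall (pages) : %llu (= %llu bytes)\n",
           shmall, shmall * (unsigned long long)page);
    return 0;
}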
As for the definition of SHMALL, I got this result on my Ubuntu 12.04 x86_64 system:
$ ack SHMMAX /usr/include
/usr/include/linux/shm.h
9: * SHMMAX, SHMMNI and SHMALL are upper limits are defaults which can
13:#define SHMMAX 0x2000000 /* max shared seg size (bytes) */
16:#define SHMALL (SHMMAX/getpagesize()*(SHMMNI/16))
/usr/include/linux/sysctl.h
113: KERN_SHMMAX=34, /* long: Maximum shared memory segment */
I am trying to identify huge memory growth in a Linux application which runs around 20-25 threads. From one of those threads I dump the memory stats using mallinfo(). It shows the total allocated space as 1005025904 bytes (uordblks). However, the top command shows 8GB total memory and 7GB resident memory for the process. Can someone explain this inconsistency?
Following is the full stat from mallinfo:
Total non-mmapped bytes (arena): 1005035520
# of free chunks (ordblks): 2
# of free fastbin blocks (smblks): 0
# of mapped regions (hblks): 43
Bytes in mapped regions (hblkhd): 15769600
Max. total allocated space (usmblks): 0
Free bytes held in fastbins (fsmblks): 0
Total allocated space (uordblks): 1005025904
Total free space (fordblks): 9616
Topmost releasable block (keepcost): 9584
The reason is that mallinfo only reports the stats of the main arena. To get details of all arenas you have to use malloc_stats().
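For illustration, a minimal sketch (glibc-specific) that prints both views: mallinfo() for the main arena, and malloc_stats(), which writes a per-arena report to stderr. Note that newer glibc versions deprecate mallinfo() in favour of mallinfo2().

#include <malloc.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Deliberately allocate a spread of sizes so the stats show something. */
    for (int i = 0; i < 1000; i++)
        (void)malloc(1024 * (i % 7 + 1));

    /* mallinfo() only covers the main arena. */
    struct mallinfo mi = mallinfo();
    printf("main arena allocated (uordblks): %d bytes\n", mi.uordblks);
    printf("main arena free      (fordblks): %d bytes\n", mi.fordblks);

    /* malloc_stats() prints a report for every arena to stderr. */
    malloc_stats();
    return 0;
}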
I recently faced a flash overflow problem. After doing some optimization in the code, I saved some flash memory and ran the software successfully. I want to know how much flash memory was saved by my changes. Please let me know how I can check the used / available flash memory. Also, I want to know how much flash is used by a particular function/file.
Below is some info about my development environment:
- AVR microcontroller with 64 KB RAM and 512 KB flash.
- Using FreeRTOS.
- Using the GNU C++ compiler.
- Using AVRATJTAGEICE for programming and debugging.
Please let me know the solution.
Regards,
Jagadeep.
GCC's size program is what you're looking for.
size can be passed the full compiled .elf file. It will, by default, output something like this:
$ size linked-file.elf
text data bss dec hex filename
11228 112 1488 12828 321c linked-file.elf
This is saying:
There are 11228 bytes in the .text "section" of this file. This is generally for functions.
There are 112 bytes of initialized data: global variables in the program with initial values.
There are 1488 bytes of uninitialized data: global variables without initial values.
dec is simply the sum of the previous 3 values: 11228 + 112 + 1488 = 12828.
hex is simply the hexadecimal representation of the dec value: 0x321c == 12828.
For embedded systems, generally dec needs to be smaller than the flash size of your target device (or the available space on the device).
It is generally sufficient to simply watch the dec or text output of GCC's size command to monitor the size of your compiled code over time. A large jump in size often indicates a poorly implemented new feature or constexpr values that are not getting compiled away. (Don't forget -ffunction-sections and -fdata-sections.)
Note: For AVRs, you'll want to use avr-size for checking the linked size of AVR .elf files. avr-size takes an extra argument for the target chip and will automatically calculate the percentage of flash used on your chosen chip.
GCC's size also works directly on intermediate object files.
This is particularly useful if you want to check the compiled size of functions.
You should see something like this excerpt:
$ size -A main.cpp.o
main.cpp.o :
section size addr
.group 8 0
.group 8 0
.text 0 0
.data 0 0
.bss 0 0
.text._Z8sendByteh 8 0
.text._ZN3XMC5IOpin7setModeENS0_4ModeE 64 0
.text._ZN7NamSpac6OptionIN5Clock4TimeEEmmEi 76 0
.text.Default_Handler 24 0
.text.HardFault_Handler 16 0
.text.SVC_Handler 16 0
.text.PendSV_Handler 16 0
.text.SysTick_Handler 28 0
.text._Z5errorPKc 8 0
.text._ZN7NamSpac5Motor2goEi 368 0
.text._ZN7NamSpac5Motor3getEv 12 0
.rodata.cst1 1 0
.text.startup.main 632 0
.text._ZN7NamSpac7Program3runEv 380 0
.text._ZN7NamSpac8Position4tickEv 24 0
.text.startup._GLOBAL__sub_I__ZN7NamSpac7displayE 292 0
.init_array 4 0
.bss._ZN5Debug9formatterE 4 0
.rodata._ZL10dispDigits 8 0
.bss.position 4 0
.bss.motorState 4 0
.bss.count 4 0
.rodata._ZL9diameters 20 0
.bss._ZN7NamSpac8diameterE 16 0
.bss._ZN5Debug3pinE 12 0
.bss._ZN7NamSpac7displayE 24 0
.rodata.str1.4 153 0
.rodata._ZL12dispSegments 32 0
.bss._ZL16diametersDisplay 10 0
.bss.loadAggregate 4 0
.bss.startCount 4 0
.bss._ZL15runtimesDisplay 10 0
.bss._ZN7NamSpac7runtimeE 16 0
.bss.startTime 4 0
.rodata._ZL8runtimes 20 0
.comment 111 0
.ARM.attributes 49 0
Total 2494
Please let me know the solution.
Sorry, there is no single solution! You have to go through what is linked into your final ELF and decide whether it was linked intentionally or pulled in by an unwanted default.
Please let me know how can I check for used flash / available flash memory.
That primarily depends on your actual target hardware platform; you have to make sure your .text section fits into the flash that is available there.
Also, I want to know how much flash is used by a particular function/file.
The nm tool from GNU binutils provides detailed information about every (global) symbol found in an ELF file and the space it occupies in its associated section. You just need to grep the results for particular functions/classes/namespaces (best demangled!) to accumulate section-type and symbol-filtered outputs for analysis.
That's the approach I've been using for a little tool called nmalyzr. Sorry to say, as it stands in the Git repo, it's not really working as intended (I have working versions that aren't pushed back yet).
In general, it's a good strategy to hunt for code that has #include <iostream> statements (no matter whether std::cout or the like is actually used, static instances are instantiated!), or for unwanted newlib/libstdc++ bindings such as default exception handling.
Use the size command from binutils on the generated ELF file. As you seem to be using an AVR chip, use avr-size.
To get the size of individual functions, use the nm command from binutils (avr-nm on AVR chips).