I have this snippet of code:
if ((shmid = shmget(key, 512, IPC_CREAT | 0666)) < 0)
{
    perror("shmget");
    exit(1);
}
Whenever I set the size any higher than 2048, I get an error that says:
shmget: Invalid argument
However when I run cat /proc/sys/kernel/shmall, I get 4294967296.
Does anybody know why this is happening? Thanks in advance!
The comment from Jerry is correct, even if cryptic if you haven't played with this stuff: "What about this: EINVAL: ... a segment with given key existed, but size is greater than the size of that segment."
He meant that the segment is already there (these segments are persistent) and it has size 2048.
You can see it among the other ones with:
$ ipcs -m
and you can remove your segment (beware: remove only your own) with:
$ ipcrm -M <key>
After that you should be able to create it larger.
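If you prefer to clean up from code rather than the shell, a sketch like the following (assuming the same key and 0666 permissions as in your snippet) marks the existing segment for removal so it can be recreated with a larger size; the remove_segment() helper name is only for illustration:
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <stdio.h>

/* Look up the existing segment by key and mark it for removal.
   It actually disappears once the last attached process detaches. */
int remove_segment(key_t key)
{
    int shmid = shmget(key, 0, 0666);   /* size 0: just look up the existing segment */
    if (shmid < 0) {
        perror("shmget");
        return -1;
    }
    if (shmctl(shmid, IPC_RMID, NULL) < 0) {
        perror("shmctl(IPC_RMID)");
        return -1;
    }
    return 0;
}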
man 5 proc refers to three variables related to shmget(2):
/proc/sys/kernel/shmall
This file contains the system-wide limit on the total number of pages of System V shared memory.
/proc/sys/kernel/shmmax
This file can be used to query and set the run-time limit on the maximum (System V IPC) shared memory segment size that can be created. Shared memory segments up to 1GB are now supported in the kernel. This value defaults to SHMMAX.
/proc/sys/kernel/shmmni
(available in Linux 2.4 and onward) This file specifies the system-wide maximum number of System V shared memory segments that can be created.
Please check that you violated none of them. Note that shmmax and SHMMAX are in bytes, while shmall and SHMALL are in pages (the page size is usually 4 KB, but you should use sysconf(_SC_PAGESIZE)). I personally felt your shmall is too large (2**32 pages == 16 TB), but I am not sure whether that is harmful or not.
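For example, this minimal sketch (Linux-specific, since it reads /proc) shows how to convert the page-based shmall value into bytes using sysconf:
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    long page = sysconf(_SC_PAGESIZE);                 /* page size in bytes */
    unsigned long long shmall_pages = 0;

    FILE *f = fopen("/proc/sys/kernel/shmall", "r");
    if (f) {
        fscanf(f, "%llu", &shmall_pages);
        fclose(f);
    }
    printf("page size      : %ld bytes\n", page);
    printf("shmall (pages) : %llu\n", shmall_pages);
    printf("shmall (bytes) : %llu\n", shmall_pages * (unsigned long long)page);
    return 0;
}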
As for the definition of SHMALL, I got this result on my Ubuntu 12.04 x86_64 system:
$ ack SHMMAX /usr/include
/usr/include/linux/shm.h
9: * SHMMAX, SHMMNI and SHMALL are upper limits are defaults which can
13:#define SHMMAX 0x2000000 /* max shared seg size (bytes) */
16:#define SHMALL (SHMMAX/getpagesize()*(SHMMNI/16))
/usr/include/linux/sysctl.h
113: KERN_SHMMAX=34, /* long: Maximum shared memory segment */
I'm writing unit tests for code which creates shared memory.
I only have a couple of tests. I make 4 allocations of shared memory and then it fails on the fifth.
After calling shmat() perror() says Too many open files:
template <typename T>
bool Attach(T** ptr, const key_type& key)
{
    // shmemId was 262151
    int32_t shmemId = shmget( key.key( ), ( size_t )0, 0644 );
    if (shmemId < 0)
    {
        perror("Error: ");
        return false;
    }
    *ptr = ( T * ) shmat(shmemId, 0, 0 );
    if ( ( int64_t ) * ptr < 0 )
    {
        // Problem is here. perror() says 'Too many open files'
        perror( "Error: ");
        return false;
    }
    return true;
}
However, when I check ipcs -m -p I only have a couple of shared memory allocations.
T ID KEY MODE OWNER GROUP CPID LPID
Shared Memory:
m 262151 0x0000a028 --rw-r--r-- 3229 0
m 262152 0x0000a029 --rw-r--r-- 3229 0
In addition, when I check my OS shared memory limits sysctl -A | grep shm I get:
kern.sysv.shmall: 1024
kern.sysv.shmmax: 4194304
kern.sysv.shmmin: 1
kern.sysv.shmmni: 32
kern.sysv.shmseg: 8
security.mac.posixshm_enforce: 1
security.mac.sysvshm_enforce: 1
Are these values large enough? Are they the cause? What values should I have?
I'm sure I edited the file to increase them and restarted the machine, but perhaps the change hasn't taken effect (this is on Mac/OS X).
Your problem may be elsewhere.
Edit: This may be a shmmni limit of macOS. See below.
When I run your [simplified] code on my system (linux), the shmget fails.
You didn't specify IPC_CREAT to the third argument. If another process has created the segment, this may be okay.
But, it doesn't/shouldn't like a size of 0. The [linux] man page states that it returns an error (errno set to EINVAL) if the size is less than SHMMIN (which is 1).
That is what happened on my system. So, I adjusted the code to use a size of 1.
This was done [as I mentioned] on linux.
macOS may allow a size of 0, even if that doesn't make practical sense. (e.g.) It may round it up to a page size.
For shmat, the error return is (void *) -1.
But, some systems can have valid addresses that have the high bit set. (e.g.) 0xFFE0000000000000 is a valid address, but your if test would treat it as an error because casting it to int64_t makes it negative.
Better to do:
if ((int64_t) *ptr == (int64_t) -1)
Or [possibly better]:
if ((void *) *ptr == (void *) -1)
Note that errno is not set/changed if the call succeeds.
To verify this, do: errno = 0 before the shmat call. If perror says "Success", then the shmat is okay. And, your current test needs to be adjusted as above--I'd do that change regardless.
You could also do (e.g):
printf("ptr=%p\n",*ptr);
Normally, errno starts as 0.
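Putting those pieces together, a minimal sketch of the attach path with the adjusted test and the errno trick might look like this (the attach_checked() name is illustrative, not from your code):
#include <errno.h>
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>

void *attach_checked(int shmemId)
{
    errno = 0;                        /* so perror() printing "Success" later means shmat was okay */
    void *p = shmat(shmemId, 0, 0);
    if (p == (void *) -1) {           /* compare against the documented error value, not "< 0" */
        perror("shmat");
        return NULL;
    }
    printf("ptr=%p\n", p);
    return p;
}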
Note that there are some differences between macOS and linux.
So, if errno is ever set to "too many open files", this can be because the process has too many open files (EMFILE).
It might be because the system-wide limit is reached (ENFILE) but that is "file table overflow".
Note that under linux shmat can not generate EMFILE. However, it appears that under macOS it can.
However, if the number of calls to shmat is limited [as you mention], the shmat should succeed.
The macOS man page is a little vague as to what the limit is based on. However, I checked the FreeBSD man page for shmat and that says it is limited by the sysctl parameter: kern.ipc.shmseg. Your grep should have caught that [if applicable].
It is possible some other syscall elsewhere in the code is opening too many files. And, that syscall is not checking the error return.
Again, I realize you're running macOS.
But, if available, you may want to try your program under linux. For example, it has much larger limits from the sysctl:
kernel.shm_next_id = -1
kernel.shm_rmid_forced = 0
kernel.shmall = 18446744073692774399
kernel.shmmax = 18446744073692774399
kernel.shmmni = 4096
vm.hugetlb_shm_group = 0
Note that shmmni is the system-wide maximum number of shared memory segments.
Note that for macOS, shmmni is 32 (vs. 4096 for linux)!?!?
That means that the entire system can only have 32 open shared memory segments for any/all processes???
That seems very low. You can probably set this to a larger number and see if that helps.
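For example, something like the following may work, with the caveat that on some macOS versions the kern.sysv.* parameters are read-only at runtime and have to be set at boot (e.g. via /etc/sysctl.conf), and in any case must be changed before the SysV shared memory subsystem is first used:
$ sudo sysctl -w kern.sysv.shmmni=64
$ sysctl kern.sysv.shmmni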
Linux has the strace program and you could use it to monitor the syscalls.
But, macOS has dtruss: How to trace system calls of a program in Mac OS X?
A WRITEBLK command fails when the item reaches 2GB in size (item is truncated to 2147483647 bytes).
Using cat I was able to create an item larger than 2GB in the same directory, but opening it in UV gave a corrupt (negative) value for STATUS<4> (Number of bytes available to read).
uv 11.1.4
64bit Linux on a VM
64BIT_FILES = 1
You can make UniVerse files 32-bit or 64-bit (regardless of the OS), so you can do a FILEINFO call to see if the file is actually 64-bit (even if the account is 64-bit).
My guess is that there is a file system limitation on the file size. In the Rocket UniVerse documentation (page 927) it says:
If the device runs out of disk space, WRITEBLK takes the ELSE clause and returns -4 to the STATUS function.
Generally only 32-bit systems would have a hard limit at 2 GB, but maybe there is some kind of 32-bit process running in your 64-bit virtual machine that is producing the same effect. See here for a few leads: https://unix.stackexchange.com/questions/274380/file-size-limit
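As an aside, the corrupt (negative) STATUS<4> value is what you would expect if the byte count is held in a signed 32-bit integer. A tiny C illustration (the wrap-around result is implementation-defined, but typically negative on common platforms):
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int64_t actual_size = 2147483648LL;          /* one byte past the 2 GB boundary */
    int32_t reported    = (int32_t) actual_size; /* what a 32-bit field would hold */
    printf("actual   : %lld\n", (long long) actual_size);
    printf("reported : %d\n", reported);         /* typically prints a large negative value */
    return 0;
}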
I have a SWIG-generated C++ code file of 24MB, nearly 500,000 lines of code. I am able to compile it when I set the compiler optimization level to -xO0, but it fails as soon as I add any other C++ compiler flags (like -xprofile ...). I am using the Solaris Studio 12.3 C++ compiler.
Below is the console error:
Element size (in bytes): 48
Table size (in elements): 2560000
Table maximum size: 134217727
Table size increment: 5000
Bytes written to disk: 0
Expansions required: 9
Segments used: 1
Max Segments used: 1
Max Segment offset: 134217727
Segment offset size:: 27
Resizes made: 0
Copies due to expansions: 4
Reset requests: 0
Allocation requests: 2827527
Deallocation requests: 267537
Allocated element count: 4086
Free element count: 2555914
Unused element count: 0
Free list size (elements): 0
ir2hf: error: Out of memory
Thanks in Advance.
I found this article suggesting that it has to do with the fact that Solaris limits the amount of memory available for data segments.
Following the steps in the blog, try to remove the limit.
$ usermod -K defaultpriv=basic,sys_resource karel
Now logoff and logon again and change the limit:
$ ulimit -d unlimited
Then check that the limit has changed
$ ulimit -d
The output should be unlimited
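If you want to confirm the new limit from inside a process rather than from the shell, a small getrlimit check like this sketch should work on Solaris as well as Linux:
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;
    if (getrlimit(RLIMIT_DATA, &rl) != 0) {      /* RLIMIT_DATA = data segment size limit */
        perror("getrlimit");
        return 1;
    }
    if (rl.rlim_cur == RLIM_INFINITY)
        printf("data segment limit: unlimited\n");
    else
        printf("data segment limit: %llu bytes\n", (unsigned long long) rl.rlim_cur);
    return 0;
}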
I'm running three Java 8 JVMs on a 64 bit Ubuntu VM which was built from a minimal install with nothing extra running other than the three JVMs. The VM itself has 2GB of memory and each JVM was limited by -Xmx512M which I assumed would be fine as there would be a couple of hundred MB spare.
A few weeks ago, one crashed and the hs_err_pid dump showed:
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 196608 bytes for committing reserved memory.
# Possible reasons:
# The system is out of physical RAM or swap space
# In 32 bit mode, the process size limit was hit
# Possible solutions:
# Reduce memory load on the system
# Increase physical memory or swap space
# Check if swap backing store is full
# Use 64 bit Java on a 64 bit OS
# Decrease Java heap size (-Xmx/-Xms)
# Decrease number of Java threads
# Decrease Java thread stack sizes (-Xss)
# Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
I restarted the JVM with a reduced heap size of 384MB and so far everything is fine. However, when I now look at the VM using the ps command and sort by descending RSS size, I see
RSS %MEM VSZ PID CMD
708768 35.4 2536124 29568 java -Xms64m -Xmx512m ...
542776 27.1 2340996 12934 java -Xms64m -Xmx384m ...
387336 19.3 2542336 6788 java -Xms64m -Xmx512m ...
12128 0.6 288120 1239 /usr/lib/snapd/snapd
4564 0.2 21476 27132 -bash
3524 0.1 5724 1235 /sbin/iscsid
3184 0.1 37928 1 /sbin/init
3032 0.1 27772 28829 ps ax -o rss,pmem,vsz,pid,cmd --sort -rss
3020 0.1 652988 1308 /usr/bin/lxcfs /var/lib/lxcfs/
2936 0.1 274596 1237 /usr/lib/accountsservice/accounts-daemon
..
..
and the free command shows
total used free shared buff/cache available
Mem: 1952 1657 80 20 213 41
Swap: 0 0 0
Taking the first process as an example, there is an RSS size of 708768 KB even though the heap limit would be 524288 KB (512*1024).
I am aware that extra memory is used over the JVM heap, but the question is how can I control this to ensure I do not run out of memory again? I am trying to set the heap size for each JVM as large as I can without crashing them.
Or is there a good general guideline as to how to set the JVM heap size in relation to overall memory availability?
There does not appear to be a way of controlling how much extra memory the JVM will use over the heap. However by monitoring the application over a period of time, a good estimate of this amount can be obtained. If the overall consumption of the java process is higher than desired, then the heap size can be reduced. Further monitoring is needed to see if this impacts performance.
Continuing with the example above and using the command ps ax -o rss,pmem,vsz,pid,cmd --sort -rss we see usage as of today is
RSS %MEM VSZ PID CMD
704144 35.2 2536124 29568 java -Xms64m -Xmx512m ...
429504 21.4 2340996 12934 java -Xms64m -Xmx384m ...
367732 18.3 2542336 6788 java -Xms64m -Xmx512m ...
13872 0.6 288120 1239 /usr/lib/snapd/snapd
..
..
These java processes are all running the same application but with different data sets. The first process (29568) has stayed stable using about 190M beyond the heap limit while the second (12934) has reduced from 156M to 35M. The total memory usage of the third has stayed well under the heap size which suggests the heap limit could be reduced.
It would seem that allowing 200MB extra non heap memory per java process here would be more than enough as that gives 600MB leeway total. Subtracting this from 2GB leaves 1400MB so the three -Xmx parameter values combined should be less than this amount.
As will be gleaned from reading the article pointed out in a comment by Fairoz, there are many different ways in which the JVM can use non-heap memory. One of these that is measurable, though, is the thread stack size. The default for a JVM can be found on linux using java -XX:+PrintFlagsFinal -version | grep ThreadStackSize. In the case above it is 1MB, and as there are about 25 threads, we can safely say that at least 25MB extra will always be required.
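If you want more of the non-heap usage to be bounded explicitly rather than just observed, standard HotSpot options can cap the main contributors. The values below are only illustrative, not recommendations for your workload:
java -Xms64m -Xmx384m -Xss512k -XX:MaxMetaspaceSize=64m -XX:ReservedCodeCacheSize=64m -XX:MaxDirectMemorySize=64m ...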
I'm currently benchmarking an application built on Leveldb. I want to configure it in such a way that the key-values are always read from disk and not from memory.
For that, I need to limit the memory consumed by the program.
I'm using key-value pairs of 100 bytes each and 100,000 of them, which makes their total size about 10 MB. If I set the virtual memory limit to less than 10 MB using ulimit, I can't even run the make command.
1) How can I configure the application so that the key value pairs are always fetched from the disk?
2) What does ulimit -v mean? Does limiting the virtual memory translate to limiting the memory used by the program on RAM?
Perhaps there is no need to reduce the available memory; you can simply disable the cache as described here:
leveldb::ReadOptions options;
options.fill_cache = false;
leveldb::Iterator* it = db->NewIterator(options);
for (it->SeekToFirst(); it->Valid(); it->Next()) {
...
}
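If you also want plain Get() reads (not just the iterator) to avoid LevelDB's own cache as far as possible, you can open the database with a deliberately small block cache. A sketch using LevelDB's public API follows; the path and cache size are arbitrary, and note that the OS page cache can still serve reads from memory unless you account for that separately:
#include <cassert>
#include "leveldb/db.h"
#include "leveldb/cache.h"

int main()
{
    leveldb::Options options;
    options.block_cache = leveldb::NewLRUCache(1 * 1048576);  // 1 MB block cache instead of the default

    leveldb::DB* db = nullptr;
    leveldb::Status status = leveldb::DB::Open(options, "/tmp/benchdb", &db);
    assert(status.ok());

    // ... run the benchmark reads here ...

    delete db;                  // close the DB before deleting the cache it uses
    delete options.block_cache;
    return 0;
}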