Icecast - Too many open files - centos7

I'm running an Icecast 2.4.3 server on CentOS 7. When I have a lot of listeners, I receive these errors and everything stops working:
[2017-06-21 18:56:37] WARN connection/_accept_connection accept() failed with error 24: Too many open files
It's running as the "ices" user:
sudo -u ices /opt/icecast/bin/icecast -c /opt/icecast/etc/icecast.xml -b
Or it runs as root with the "changeowner" option set to the "ices" user.
I set limits.conf:
ices hard nofile 65536
ices soft nofile 65536
From ulimit:
[root@orfeu inweb]# su ices
[ices@orfeu inweb]$ ulimit -n
65536
But when I check the PID, I get:
tcp 0 0 <IP>:8000 0.0.0.0:* LISTEN 21650/icecast
[root@orfeu inweb]# cat /proc/21650/limits
Limit Soft Limit Hard Limit Units
...
Max open files 1024 4096 files
...
How can I fix this, to enforce 65536 file descriptors? Thank you.

I may have found a solution; I need to verify it the next time the problem happens.
I realized that limits.conf sets the limits per user.
I found a way to set the limits per process:
/usr/bin/prlimit -n30000 -p `cat /var/run/icecast.pid`
Now I have:
Max open files 30000 30000 files
I don't know if there's a way to always start the "icecast" binary with these limits, or whether I always have to run the command against the PID after startup.
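One possible way to make this permanent, if the server is started from a systemd unit, is a drop-in that raises the limit for the service itself. This is a hedged sketch only: it assumes a service named "icecast" exists, which is not something shown in the original setup, so adjust names and paths to your installation:
# Assumed unit name "icecast"; adjust to match your setup.
sudo mkdir -p /etc/systemd/system/icecast.service.d
sudo tee /etc/systemd/system/icecast.service.d/limits.conf <<'EOF'
[Service]
LimitNOFILE=65536
EOF
sudo systemctl daemon-reload
sudo systemctl restart icecast
grep "Max open files" /proc/$(pgrep -x icecast | head -1)/limits   # verify the new limit
If you start the binary by hand instead, running prlimit against the PID right after launch, as above, remains the practical workaround.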

Related

java.lang.OutOfMemoryError when running bazel build

I have been trying to install the ONOS controller on my Ubuntu VM on my Mac, following the steps in this link: Download ONOS code & Build ONOS.
However, the build fails after executing the following command:
~/onos$ bazel build onos
The above command outputs the following:
Starting local Bazel server and connecting to it...
INFO: Analysed target //:onos (759 packages loaded, 12923 targets configured).
INFO: Found 1 target...
.
.
.
enconfig-native; [2,128 / 2,367] //models/openconfig:onos-models-openconfig-native; ERROR: /home/mohamedzidan/onos/models/openconfig/BUILD:11:1: Building models/openconfig/libonos-models-openconfig-native-class.jar (2 source jars) failed (Exit 1)
[2,128 / 2,367] //models/openconfig:onos-models-openconfig-native; An exception has occurred in the compiler (10.0.1). Please file a bug against the Java compiler via the Java bug reporting page (http://bugreport.java.com) after checking the Bug Database (http://bugs.java.com) for duplicates. Include your program and the following diagnostic in your report. Thank you.
java.lang.OutOfMemoryError: Java heap space
at jdk.compiler/com.sun.tools.javac.util.ArrayUtils.ensureCapacity(ArrayUtils.java:60)
at jdk.compiler/com.sun.tools.javac.util.SharedNameTable.fromUtf(SharedNameTable.java:132)
at jdk.compiler/com.sun.tools.javac.util.Names.fromUtf(Names.java:392)
at jdk.compiler/com.sun.tools.javac.util.ByteBuffer.toName(ByteBuffer.java:159)
at jdk.compiler/com.sun.tools.javac.jvm.ClassWriter$CWSignatureGenerator.toName(ClassWriter.java:320)
at jdk.compiler/com.sun.tools.javac.jvm.ClassWriter$CWSignatureGenerator.access$300(ClassWriter.java:266)
at jdk.compiler/com.sun.tools.javac.jvm.ClassWriter.typeSig(ClassWriter.java:335)
at jdk.compiler/com.sun.tools.javac.jvm.ClassWriter.writeMethod(ClassWriter.java:1153)
at jdk.compiler/com.sun.tools.javac.jvm.ClassWriter.writeMethods(ClassWriter.java:1653)
at jdk.compiler/com.sun.tools.javac.jvm.ClassWriter.writeClassFile(ClassWriter.java:1761)
at jdk.compiler/com.sun.tools.javac.jvm.ClassWriter.writeClass(ClassWriter.java:1679)
at jdk.compiler/com.sun.tools.javac.main.JavaCompiler.genCode(JavaCompiler.java:743)
at jdk.compiler/com.sun.tools.javac.main.JavaCompiler.generate(JavaCompiler.java:1641)
at jdk.compiler/com.sun.tools.javac.main.JavaCompiler.generate(JavaCompiler.java:1609)
at jdk.compiler/com.sun.tools.javac.main.JavaCompiler.compile(JavaCompiler.java:959)
at jdk.compiler/com.sun.tools.javac.api.JavacTaskImpl.lambda$doCall$0(JavacTaskImpl.java:100)
at jdk.compiler/com.sun.tools.javac.api.JavacTaskImpl$$Lambda$97/1225568095.call(Unknown Source)
at jdk.compiler/com.sun.tools.javac.api.JavacTaskImpl.handleExceptions(JavacTaskImpl.java:142)
at jdk.compiler/com.sun.tools.javac.api.JavacTaskImpl.doCall(JavacTaskImpl.java:96)
at jdk.compiler/com.sun.tools.javac.api.JavacTaskImpl.call(JavacTaskImpl.java:90)
at com.google.devtools.build.buildjar.javac.BlazeJavacMain.compile(BlazeJavacMain.java:113)
at com.google.devtools.build.buildjar.SimpleJavaLibraryBuilder$$Lambda$70/778731861.invokeJavac(Unknown Source)
at com.google.devtools.build.buildjar.ReducedClasspathJavaLibraryBuilder.compileSources(ReducedClasspathJavaLibraryBuilder.java:57)
at com.google.devtools.build.buildjar.SimpleJavaLibraryBuilder.compileJavaLibrary(SimpleJavaLibraryBuilder.java:116)
at com.google.devtools.build.buildjar.SimpleJavaLibraryBuilder.run(SimpleJavaLibraryBuilder.java:123)
at com.google.devtools.build.buildjar.BazelJavaBuilder.processRequest(BazelJavaBuilder.java:105)
at com.google.devtools.build.buildjar.BazelJavaBuilder.runPersistentWorker(BazelJavaBuilder.java:67)
at com.google.devtools.build.buildjar.BazelJavaBuilder.main(BazelJavaBuilder.java:45)
[2,128 / 2,367] //models/openconfig:onos-models-openconfig-native; Target //:onos failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 1386.685s, Critical Path: 117.31s
INFO: 379 processes: 125 linux-sandbox, 254 worker.
FAILED: Build did NOT complete successfully
Your output shows java.lang.OutOfMemoryError: Java heap space. You can increase the amount of memory available to javac with something like this:
BAZEL_JAVAC_OPTS="-J-Xms384m -J-Xmx512m"
If that still doesn't work, try progressively larger sizes for -Xmx. This issue is discussed further at:
https://github.com/bazelbuild/bazel/issues/1308
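For example, a minimal sketch (assuming a bash-like shell, that BAZEL_JAVAC_OPTS is honored by your Bazel version as the issue above suggests, and that you build from the ONOS checkout; adjust the -Xmx value to your machine):
export BAZEL_JAVAC_OPTS="-J-Xms384m -J-Xmx512m"   # give the Java compiler a larger heap
bazel build onos                                  # then rerun the build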
Summary
If bazel runs out of memory while building, and you see this error:
java.lang.OutOfMemoryError: Java heap space
...then do this:
Increase your RAM or your virtual memory swap file size, to emulate having more RAM (details on how to do this are below).
From then on, build with a bazel command like the one below to give Bazel more heap space (RAM) while building. In this case I am giving it a 32 GB maximum heap:
# Do this to give Bazel up to 32GB of RAM while building
time bazel --host_jvm_args=-Xmx32g build //...
# ...instead of doing this
time bazel build //...
Details
If Bazel fails with any version of the following error, it's because it ran out of heap space while trying to build.
Example error:
java.lang.OutOfMemoryError: Java heap space
I see that error in the output you pasted. Although not widely known, some monster-sized projects and mono-repos can require a heap of 16 GB or more, so I recommend you just create a massive 32-64 GB swap file (virtual memory) on your Linux build machine and let Bazel use it. Give it the whole thing to build!
CAUTION: if you have a standard HDD (spinning hard disk drive), this may make the build run dozens or even hundreds of times slower than building in physical RAM, because HDDs are extremely slow.
But if you have a 2.5" or 3.5" SSD (solid-state drive), it works fine, and better still if you have an m.2 form-factor SSD. An m.2 SSD is fast enough that you can get away with huge swap files standing in for RAM.
With a top-of-the-line internal m.2 SSD, I expect a build that relies on virtual memory to be only ~2x slower than building in physical RAM of the same size. With a very slow spinning HDD, however, the same build that takes 2 hrs using a swap file on an internal m.2 SSD might take multiple days or more using a swap file on the HDD.
Your results may vary, of course, but the slower you expect your swap file to be, the smaller you should keep the JVM heap you give Bazel, so that it leans on virtual memory less.
Increase your system's swap file (virtual memory) to at least 32-64 GB. To add or remove a swapfile, follow the detailed instructions here: https://linuxize.com/post/how-to-add-swap-space-on-ubuntu-18-04/. UPDATE: use my own instructions here instead: How do I increase the size of swapfile without removing it in the terminal? My instructions avoid the pitfalls of fallocate by using dd instead, as I explain in my answer there.
In short, here is how to add a swapfile:
sudo dd if=/dev/zero of=/swapfile count=64 bs=1G # Create a 64 GiB file
sudo mkswap /swapfile # turn this new file into swap space
sudo chmod 0600 /swapfile # only let root read from/write to it,
# for security
sudo swapon /swapfile # enable it
swapon --show # verify this new 64GB swap file is
# now active
sudo gedit /etc/fstab # edit the /etc/fstab file to make these
# changes persistent (load them each boot)
# ADD this line to bottom (w/out the # comment symbol):
# /swapfile none swap sw 0 0
cat /proc/sys/vm/swappiness # not required: verify your system's
# "swappiness" value. Note: values now range 0 to 200 (they used to only
# go up to 100), and have a default value of 60. I highly recommend
# you follow my instructions here to set your swappiness to 0,
# however, to improve your system's performance:
# https://askubuntu.com/a/1445347/327339
To resize or delete your swapfile: if you ever need to resize the swap file you just made above, you can first delete it like this:
sudo swapoff -v /swapfile # turn swap file off
sudo swapon --show # verify the swap file is off
free -h # you can also look at this as an
# indication the swap file is off
sudo rm /swapfile # remove the swap file
Then, you can either follow the instructions above again to recreate it at a new size, or if you are permanently deleting it you'll need to edit your /etc/fstab file to remove the /swapfile none swap sw 0 0 line you previously added to the bottom of it.
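If you prefer doing that last /etc/fstab edit from the terminal, here is a hedged one-liner (back up the file first; the pattern assumes the exact line added above):
sudo cp /etc/fstab /etc/fstab.bak                           # back up first
sudo sed -i '\|^/swapfile none swap sw 0 0$|d' /etc/fstab   # drop the swapfile line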
Add --host_jvm_args=-Xmx32g to any bazel command, right after the word bazel. This sets the maximum heap of Bazel's JVM (its build heap, in this case) to 32 GB, which spills into your swap file once your physical RAM is full. If you have a high-speed SSD, which handles swap surprisingly well, expect to wait at most a few hours for the build to complete, depending on the repo size. If you have an old spinning HDD, a repo that takes 2 hrs to build with a swap file on an internal m.2 SSD might take up to several days with a swap file on the slow spinning HDD, especially if it's an external rather than an internal drive.
Here is a sample full bazel command with this bazel startup option added, to build an entire repo:
time bazel --host_jvm_args=-Xmx32g build //...
...instead of this:
time bazel build //...
The time prefix just prints a readable summary of how long the build took (I like having it). Just be sure to set the maximum heap allotted to Bazel for any bazel build command by putting --host_jvm_args=-Xmx32g (or similar) right after the word bazel whenever you need it.
Note that setting the max heap like we are doing here with -Xmx is NOT the same thing as setting the initial (default) heap like others might do with -Xms. Setting the max heap still starts from the default initial heap but lets it grow as needed. The other answer shows setting both via an environment variable.
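For example, a hedged sketch (--host_jvm_args is a Bazel startup option that can be passed more than once; the 8 GB and 32 GB values are just illustrative assumptions):
# Start Bazel's server JVM with an 8 GB initial heap that may grow up to 32 GB:
time bazel --host_jvm_args=-Xms8g --host_jvm_args=-Xmx32g build //...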
Done!
References:
[My own answer] Ask Ubuntu: How do I increase the size of swapfile without removing it in the terminal?
https://linuxize.com/post/how-to-add-swap-space-on-ubuntu-18-04/
https://serverfault.com/questions/684771/best-way-to-disable-swap-in-linux/684792#684792
My answer: How do I configure swappiness?
See also:
https://github.com/bazelbuild/bazel/issues/1308

gsutil rsync: amount transferred differs from `du`

I'm trying to backup a large directory (several terabytes) to google cloud, using the following command:
gsutil -m rsync -r -e local_dir/ gs://target/bucket
In summary, run in parallel (-m), recursively (-r) search the directory local_dir/ (don't follow symlinks -e), and store it remotely in the bucket gs://target/bucket.
This operation completes successfully:
[666.4k/666.4k files][ 6.3 TiB / 6.3 TiB] 100% Done
Operation completed over 666.4k objects/6.3 TiB.
One thing that worries me, however, is that the folder size is different when I run du:
$ du --max-depth 1 -h local_dir/
...
7.6T local_dir
Can anyone explain where the discrepancy of more than 1 TiB between what gsutil transferred and what du reports comes from?
Part of the difference is that Linux du is reporting in units of terabytes (10^12 bytes), while gsutil is reporting in units of tebibytes (2^40 bytes). A tebibyte is about 1.0995 terabytes, so the same data produces a larger number from du than from gsutil. Additionally, directories and inodes consume space beyond the file data bytes. For example, if you run these commands:
mkdir tmp
cd tmp
for f in {1..1000}; do
    touch $f
done
du -h
it reports 24K used, even though each of the files is empty (so, an average of about 24 bytes per file just for the directory entries). And if you remove the temp files and run du -s on the now-empty directory, it still consumes 4K. So, your 666.4k files would consume approximately 16 MB, plus however much space the containing directories take. Also, the amount used may differ depending on the type of filesystem you're using. The numbers I reported above are for an ext4 filesystem running on Debian Linux.
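If you want to sanity-check the unit conversion described above, here is a small sketch (it assumes bc is installed):
# 7.6 terabytes (10^12 bytes) expressed in tebibytes (2^40 bytes):
echo "7.6 * 10^12 / 2^40" | bc -l    # prints roughly 6.91
So the unit conversion alone brings 7.6 down to about 6.9, and the directory and filesystem overhead described above accounts for part of the remaining difference.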

Increasing memory limit for Drush 9 (without changing php.ini file)

I'm trying to run:
drush updb
with drush 9.3.0 on my Drupal 8 site and I'm getting error:
The command could not be executed successfully (returned: PHP Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 32 bytes) in /home/myproject/www/staging.myproject.ch/core/lib/Drupal/Core/Routing/CompiledRoute.php on line 163
Tried to run drush like:
php -d memory_limit=1024M vendor/bin/drush ev "echo ini_get('memory_limit')"
and I do get the 1024M value, but when I run updb that way I still get the previous memory error.
Here:
https://github.com/drush-ops/drush/issues/3294
...I saw that Drush 9 runs some tasks in sub-processes, which is most likely the case with the updb command, so even though Drush starts with an increased memory limit, the sub-process gets the default one.
How can I increase memory limit for drush 9 without changing php.ini file?
The answer to this is:
echo "memory_limit = 512M" > drush.ini
PHPRC=./drush.ini php vendor/bin/drush updb
rm drush.ini
I guess this drush.ini could be a "normal" static file, but since all of this is only needed because of mismatched server settings (too little PHP memory), maybe it should not be part of the project...
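A minimal sketch of that idea, assuming a POSIX shell and that vendor/bin/drush exists (the wrapper name is made up), so the override never has to live in the repository:
#!/bin/sh
# drush-highmem.sh (hypothetical): run drush with a raised memory_limit through a
# throwaway ini file; sub-processes inherit it via the PHPRC environment variable.
tmpini="$(mktemp)"
trap 'rm -f "$tmpini"' EXIT
echo "memory_limit = 512M" > "$tmpini"
PHPRC="$tmpini" php vendor/bin/drush "$@"
Usage would then be: ./drush-highmem.sh updb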

Hugepages exist, but are not free nor reserved. Or, how do I free hugepages?

I'm running an SPDK experiment (which uses DPDK, which in turn uses hugepages) and it was working yesterday. I'm running it in a shared environment (I think one or two other people use this machine for other stuff). Now, whenever I try to run it, I get a "no free hugepages" error.
Output of /proc/meminfo is:
HugePages_Total: 1024
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
Output of mount:
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb,release_agent=/run/cgmanager/agents/cgm-release-agent.hugetlb)
Something that worked in my last environment doesn't work anymore:
umount -a -t hugetlbfs
mount -t hugetlbfs nodev /mnt/huge
Then the output of /proc/meminfo is
HugePages_Total: 1024
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 1024
But if I try running it:
EAL: No free hugepages reported in hugepages-1048576kB
EAL: No free hugepages reported in hugepages-2048kB
PANIC in rte_eal_init():
Cannot get hugepage information
Why are these pages surplus and not free? Is there any way I can free them? I don't want to restart the system, since there might be other jobs running or people using it.
Edit: I restarted the machine, allocated more hugepages, and they were free. I executed the test, it crashed, and now the hugepages are lost again.
Relevant questions with no working answer (at least for me):
How to release hugepages from the crashed application
How to really free hugepages in Linux for use by a new process?
If you follow the instructions below, you can get rid of the allocated hugepages:
1) Let's check the hugepages, which were free after the restart
dpdk@dpdkvm:~$ ls /mnt/huge/
empty
dpdk@dpdkvm:~/dpdk-1.8.0/examples/kni$ cat /proc/meminfo
...
HugePages_Total: 256
HugePages_Free: 256
...
2) Start a DPDK application with wrong parameters, producing an error:
dpdk@dpdkvm:~/dpdk-1.8.0/examples/kni$ sudo ./build/kni -c 0x03 -n 2 -- -P -p 0x03 --config="(0,0,1),(1,0,1)"
...
EAL: Error - exiting with code: 1
Cause: No supported Ethernet device found
3) When I check the hugepages, none are free:
dpdk@dpdkvm:~/dpdk-1.8.0/examples/kni$ cat /proc/meminfo
...
HugePages_Total: 256
HugePages_Free: 0
...
4) Now, when I check the mounted hugepage directory, I can see the files that were not given back to the OS by the DPDK application.
dpdk@dpdkvm:~/dpdk-1.8.0/examples/kni$ ls /mnt/huge/
...
rtemap_0 rtemap_137 rtemap_176 rtemap_214 rtemap_253 rtemap_62
...
5) Finally, if you remove the files starting with rtemap, you give the hugepages back:
dpdk@dpdkvm:~/dpdk-1.8.0/examples/kni$ sudo rm /mnt/huge/*
[sudo] password for dpdk:
dpdk@dpdkvm:~/dpdk-1.8.0/examples/kni$ cat /proc/meminfo
...
HugePages_Total: 256
HugePages_Free: 256
...
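A compact version of the same cleanup, as a hedged sketch (it assumes hugetlbfs is mounted at /mnt/huge and that no DPDK process still has these files mapped; check first, or you can break a running application):
sudo lsof +D /mnt/huge            # make sure nothing still maps these files
sudo rm -f /mnt/huge/rtemap_*     # remove the leaked DPDK hugepage files
grep Huge /proc/meminfo           # HugePages_Free should go back up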

Can't open serial port on Matlab UNIX

I'm trying to establish serial communication with an Arduino through USB (running Arch Linux). I can do it in a straightforward way through a C++ program using boost::asio, but I recently installed Matlab and have been encountering some issues. I manage to create the serial object with s0=serial('/dev/ttyACM0'), but when I call fopen(s0) I get the following error:
Error using serial/fopen (line 72)
Open failed: Port: /dev/ttyACM0 is not available. No ports are available.
Here is what I did to get serial port communication working in Matlab R2014a on Arch Linux 64 bit:
1a) follow the steps described here: http://www.matlabarduino.org/serial-communication.html:
sudo chmod 777 /dev/ttyACM0
Alternatively, add your user to the group uucp: > sudo gpasswd --add username uucp
sudo nano $MATLABROOT/bin/$ARCH/java.opts --> add: -Dgnu.io.rxtx.SerialPorts=/dev/ttyS0:/dev/ttyUSB0:/dev/ttyACM0
1b) check that the connection works in gtkterm (select port ttyACM0)
2) additionally (critical only for Matlab):
sudo chmod 777 /run/lock
/run/lock was symlinked from /var/lock on my distro, so you might have to do this with the latter dir (it was 755); alternatively, you can manage access rights to /run/lock/ via ACLs.
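For the ACL route, a hedged sketch (it assumes the setfacl/getfacl tools are installed and that your user is the one running Matlab):
sudo setfacl -m u:$USER:rwx /run/lock    # grant just your user write access
getfacl /run/lock                        # verify the new entry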
How I got to this solution:
sudo strace -p 4668 -f -s100 2>&1 | grep -C3 --color -i -e /dev -e serialports -e uucp
-p: process ID == second column from > sudo ps -aux | grep -i matlab
Then, in Matlab type >> sps=instrhwinfo('serial') (which in my case always returned a structure of empty cell-arrays) and monitor the output of strace.
Hope that helps!
cheers :)
By default, only root can use the serial port.
You can add your user to the serial group "dialout" so that you can use the serial port.
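A small sketch of that (hedged: the group is called dialout on Debian-style systems, but on Arch Linux, as used in the question, the serial group is usually uucp):
sudo usermod -aG dialout "$USER"    # add yourself to the serial group
# Log out and back in, then confirm membership:
groups | grep -o -e dialout -e uucp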
Then make a soft link from /dev/ttyACM0 to one of /dev/ttyS[0-255]:
ln -sf /dev/ttyACM0 /dev/ttyS100 # for example
MATLAB releases before R2017a may run into this issue.
A detailed description can be found here:
Why is my serial port not recognized with MATLAB on Linux or Solaris?
Hope this helps.