"virtual memory exhausted" when building Docker image - c++

When building a Docker image, the build compiles some C++ sources, and I ended up with errors like:
src/amun/CMakeFiles/cpumode.dir/build.make:134: recipe for target 'src/amun/CMakeFiles/cpumode.dir/cpu/decoder/encoder_decoder_state.cpp.o' failed
virtual memory exhausted: Cannot allocate memory
But when building the same .cpp code on the host machine, it works fine.
After some checking, the error message seems to be similar to the one people get on a Raspberry Pi: https://www.bitpi.co/2015/02/11/how-to-change-raspberry-pis-swapfile-size-on-rasbian/
And after some more googling, this post on the Mac forum says that:
Swapfiles are dynamically created as needed, until either the disk is full, or the kernel runs out of page table space. I do not think you can change the page table space limits in the Mac OS X kernel. I have not seen anything in all the years I've been using OS X.
Is there a way to increase the swap space for Docker build on Mac OS?
If not, what else can be done to overcome the "virtual memory exhausted" error when building a Docker image?

That does not seem trivial to do with XHyve.
As stated in this thread
I think the default size of the VM is 16GB. I kept running out of swap space even after bumping the ram on the VM up to 16GB.
Check if the method used for a VirtualBox VM would apply in XHyve: see "How to increase the swap space available in the boot2docker virtual machine?"
boot2docker ssh                                           # enter the boot2docker VM
export SWAPFILE=/mnt/sda1/swapfile
sudo dd if=/dev/zero of=$SWAPFILE bs=1024 count=4194304   # create a 4 GiB file
sudo mkswap $SWAPFILE                                     # format it as swap space
sudo chmod 600 $SWAPFILE                                  # restrict access to root
sudo swapon $SWAPFILE                                     # enable it
exit
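If those commands succeed, you can verify the swap is active from inside the VM before exiting; a quick check (assuming free and /proc/meminfo are available in the boot2docker image):
free -m                        # the Swap: row should now show roughly 4096 MB total
grep SwapTotal /proc/meminfo   # the same number, reported by the kernel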
Check also this Digital Ocean Setup, again to test in your XHyve context.
mkswap is also seen here or in docker-root-xhyve/contrib/makehdd/makehdd.sh.

Since you have enough available memory on your host, I recommend assigning more memory to the Docker VM that runs behind the scenes.
As stated here:
I can see that you are on OSX, which runs Docker inside a Linux VM. Configure the max memory by clicking the whale icon in the task bar. By default it is 2 GB.
For further information: https://docs.docker.com/docker-for-mac/#memory
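To confirm how much memory the Docker VM actually got after changing the setting, one option (assuming a reasonably recent Docker CLI) is:
docker info --format '{{.MemTotal}}'   # total memory visible to the Docker engine, in bytes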

Related

Multipass log (multipassd.log) bloating disk

My multipassd.log had grown to 200+ GB before I noticed (because my disk was full). Stupid as I was, I deleted the log with rm -rf multipassd.log (the file was so big I couldn't open it). This apparently deleted the file without freeing the space on disk. So now I have 200+ GB of inaccessible disk space.
The space does not show up as used when checking the file system with the du command, even from the root directory. I also downloaded DaisyDisk, which showed that there were 200 GB of "hidden files" but couldn't access or delete them, even with all privileges enabled.
Eventually I fixed it, but if anyone can explain why rm -rf multipassd.log failed to free the disk space, that would be appreciated :)
After messing around for a couple hours, I fixed it by opening Console then creating a new log file and reloading the multipass launcher daemon with
$ sudo touch /Library/Logs/Multipass/multipassd.log
$ sudo launchctl unload /Library/LaunchDaemons/com.canonical.multipassd.plist
$ sudo launchctl load /Library/LaunchDaemons/com.canonical.multipassd.plist
Then I started an instance of Multipass and cleared the (almost empty) log from the already opened console. That freed up the disk space :)
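As for why rm -rf multipassd.log did not free the space: on Unix-like systems, deleting a file only removes its directory entry; the disk blocks are not released until the last process holding the file open closes it. multipassd still had the log open, which is why du (which walks directory entries) saw nothing while the disk stayed full, and why reloading the daemon finally released the space. One way to spot such deleted-but-still-open files is with lsof, for example:
sudo lsof +L1 | grep -i multipass   # open files with link count 0, i.e. deleted but still held open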

java.lang.OutOfMemoryError when running bazel build

I have been trying to install the ONOS controller on my Ubuntu VM on my Mac following the steps in this link: Download ONOS code & Build ONOS.
However, the building process is not successful after executing the following command:
~/onos$ bazel build onos
The above command outputs the following:
Starting local Bazel server and connecting to it...
INFO: Analysed target //:onos (759 packages loaded, 12923 targets configured).
INFO: Found 1 target...
.
.
.
enconfig-native; [2,128 / 2,367] //models/openconfig:onos-models-openconfig-native; ERROR: /home/mohamedzidan/onos/models/openconfig/BUILD:11:1: Building models/openconfig/libonos-models-openconfig-native-class.jar (2 source jars) failed (Exit 1)
[2,128 / 2,367] //models/openconfig:onos-models-openconfig-native; An exception has occurred in the compiler (10.0.1). Please file a bug against the Java compiler via the Java bug reporting page (http://bugreport.java.com) after checking the Bug Database (http://bugs.java.com) for duplicates. Include your program and the following diagnostic in your report. Thank you.
java.lang.OutOfMemoryError: Java heap space
at jdk.compiler/com.sun.tools.javac.util.ArrayUtils.ensureCapacity(ArrayUtils.java:60)
at jdk.compiler/com.sun.tools.javac.util.SharedNameTable.fromUtf(SharedNameTable.java:132)
at jdk.compiler/com.sun.tools.javac.util.Names.fromUtf(Names.java:392)
at jdk.compiler/com.sun.tools.javac.util.ByteBuffer.toName(ByteBuffer.java:159)
at jdk.compiler/com.sun.tools.javac.jvm.ClassWriter$CWSignatureGenerator.toName(ClassWriter.java:320)
at jdk.compiler/com.sun.tools.javac.jvm.ClassWriter$CWSignatureGenerator.access$300(ClassWriter.java:266)
at jdk.compiler/com.sun.tools.javac.jvm.ClassWriter.typeSig(ClassWriter.java:335)
at jdk.compiler/com.sun.tools.javac.jvm.ClassWriter.writeMethod(ClassWriter.java:1153)
at jdk.compiler/com.sun.tools.javac.jvm.ClassWriter.writeMethods(ClassWriter.java:1653)
at jdk.compiler/com.sun.tools.javac.jvm.ClassWriter.writeClassFile(ClassWriter.java:1761)
at jdk.compiler/com.sun.tools.javac.jvm.ClassWriter.writeClass(ClassWriter.java:1679)
at jdk.compiler/com.sun.tools.javac.main.JavaCompiler.genCode(JavaCompiler.java:743)
at jdk.compiler/com.sun.tools.javac.main.JavaCompiler.generate(JavaCompiler.java:1641)
at jdk.compiler/com.sun.tools.javac.main.JavaCompiler.generate(JavaCompiler.java:1609)
at jdk.compiler/com.sun.tools.javac.main.JavaCompiler.compile(JavaCompiler.java:959)
at jdk.compiler/com.sun.tools.javac.api.JavacTaskImpl.lambda$doCall$0(JavacTaskImpl.java:100)
at jdk.compiler/com.sun.tools.javac.api.JavacTaskImpl$$Lambda$97/1225568095.call(Unknown Source)
at jdk.compiler/com.sun.tools.javac.api.JavacTaskImpl.handleExceptions(JavacTaskImpl.java:142)
at jdk.compiler/com.sun.tools.javac.api.JavacTaskImpl.doCall(JavacTaskImpl.java:96)
at jdk.compiler/com.sun.tools.javac.api.JavacTaskImpl.call(JavacTaskImpl.java:90)
at com.google.devtools.build.buildjar.javac.BlazeJavacMain.compile(BlazeJavacMain.java:113)
at com.google.devtools.build.buildjar.SimpleJavaLibraryBuilder$$Lambda$70/778731861.invokeJavac(Unknown Source)
at com.google.devtools.build.buildjar.ReducedClasspathJavaLibraryBuilder.compileSources(ReducedClasspathJavaLibraryBuilder.java:57)
at com.google.devtools.build.buildjar.SimpleJavaLibraryBuilder.compileJavaLibrary(SimpleJavaLibraryBuilder.java:116)
at com.google.devtools.build.buildjar.SimpleJavaLibraryBuilder.run(SimpleJavaLibraryBuilder.java:123)
at com.google.devtools.build.buildjar.BazelJavaBuilder.processRequest(BazelJavaBuilder.java:105)
at com.google.devtools.build.buildjar.BazelJavaBuilder.runPersistentWorker(BazelJavaBuilder.java:67)
at com.google.devtools.build.buildjar.BazelJavaBuilder.main(BazelJavaBuilder.java:45)
[2,128 / 2,367] //models/openconfig:onos-models-openconfig-native; Target //:onos failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 1386.685s, Critical Path: 117.31s
INFO: 379 processes: 125 linux-sandbox, 254 worker.
FAILED: Build did NOT complete successfully
Your output shows java.lang.OutOfMemoryError: Java heap space. You can increase the amount of memory available to javac with something like this:
BAZEL_JAVAC_OPTS="-J-Xms384m -J-Xmx512m"
If that still doesn't work, try progressively larger values for -Xmx. This issue is discussed further at:
https://github.com/bazelbuild/bazel/issues/1308
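For example, assuming a bash shell, you might export the variable and then re-run the build (the sizes shown are just starting points):
export BAZEL_JAVAC_OPTS="-J-Xms384m -J-Xmx512m"   # heap options passed through to javac
bazel build onos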
Summary
If bazel runs out of memory while building, and you see this error:
java.lang.OutOfMemoryError: Java heap space
...then do this:
Increase your RAM or your virtual memory swap file size, to emulate having more RAM (details on how to do this are below).
From then on, build with a bazel command like the one below to give Bazel more heap space (RAM) while building. In this case I am giving it a 32 GB maximum heap:
# Do this to give Bazel up to 32GB of RAM while building
time bazel --host_jvm_args=-Xmx32g build //...
# ...instead of doing this
time bazel build //...
Details
If Bazel fails with any version of the following error, it's because it ran out of heap space while trying to build.
Example error:
java.lang.OutOfMemoryError: Java heap space
I see that error in the output you pasted. Although not well known, some monster-sized projects and mono-repos can require a heap of 16 GB or more, so I recommend you just create a massive 32–64 GB swap file (virtual memory) on your Linux build machine and let Bazel use the whole thing to build!
CAUTION: if you have a standard HDD (spinning hard disk drive), swapping this heavily may make the build run dozens or even hundreds of times slower than building in physical RAM, because HDDs are horribly slow for this access pattern!
BUT: if you have a 2.5" or 3.5" SSD (solid state drive), it works OK, and an m.2 form-factor SSD is better still! An m.2 SSD is fast enough that you can get away with huge swap files standing in for RAM.
With a top-of-the-line internal m.2 SSD, I'd expect a build running from virtual memory to be only ~2x slower than building from physical RAM of the same size. With a super slow spinning HDD, however, the same build that takes 2 hrs with a swap file on the internal m.2 SSD might take multiple days or more.
Your results may vary, of course; the slower you expect your swap file to be, the smaller you should make Bazel's JVM heap, so that it leans less on virtual memory.
Increase your system's swap file (virtual memory) to at least 32–64 GB. To add or remove a swapfile, follow the detailed instructions here: https://linuxize.com/post/how-to-add-swap-space-on-ubuntu-18-04/. UPDATE: use my own instructions here instead: How do I increase the size of swapfile without removing it in the terminal? My instructions avoid the pitfalls of fallocate by using dd instead, as I explain in my answer there.
In short, here is how to add a swapfile:
sudo dd if=/dev/zero of=/swapfile count=64 bs=1G # Create a 64 GiB file
sudo mkswap /swapfile # turn this new file into swap space
sudo chmod 0600 /swapfile # only let root read from/write to it,
# for security
sudo swapon /swapfile # enable it
swapon --show # verify this new 64GB swap file is
# now active
sudo gedit /etc/fstab # edit the /etc/fstab file to make these
# changes persistent (load them each boot)
# ADD this line to bottom (w/out the # comment symbol):
# /swapfile none swap sw 0 0
cat /proc/sys/vm/swappiness # not required: verify your system's
# "swappiness" value. Note: values now range 0 to 200 (they used to only
# go up to 100), and have a default value of 60. I highly recommend
# you follow my instructions here to set your swappiness to 0,
# however, to improve your system's performance:
# https://askubuntu.com/a/1445347/327339
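For reference, setting swappiness itself is a one-liner (the linked answer covers the full persistent setup):
sudo sysctl vm.swappiness=0                                # takes effect immediately, until reboot
echo 'vm.swappiness = 0' | sudo tee -a /etc/sysctl.conf    # one way to persist it across reboots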
To resize or delete your swapfile: if you ever need to resize the swap file you made above, you can first remove it like this:
sudo swapoff -v /swapfile # turn swap file off
sudo swapon --show # verify the swap file is off
free -h # you can also look at this as an
# indication the swap file is off
sudo rm /swapfile # remove the swap file
Then, you can either follow the instructions above again to recreate it at a new size, or if you are permanently deleting it you'll need to edit your /etc/fstab file to remove the /swapfile none swap sw 0 0 line you previously added to the bottom of it.
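Alternatively, the Ask Ubuntu answer linked above grows the swap file in place rather than recreating it; a minimal sketch, assuming GNU dd (oflag=append conv=notrunc appends to the existing file without truncating it):
sudo swapoff /swapfile                                                       # deactivate the swap file first
sudo dd if=/dev/zero bs=1G count=32 oflag=append conv=notrunc of=/swapfile   # append 32 GiB to it
sudo mkswap /swapfile                                                        # re-initialize the larger file as swap
sudo swapon /swapfile                                                        # reactivate it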
Add --host_jvm_args=-Xmx32g to any bazel command, right after the word bazel. This sets the Java Virtual Machine's max heap for the bazel build to 32 GB, which spills into your swap file once your physical RAM is full. If you have a high-speed SSD, which handles swap surprisingly well, expect to wait a few hours at most for your build to complete, depending on the repo size. If you have an old spinning HDD, a repo that takes 2 hrs to build with a swap file on an internal m.2 SSD might take up to several days with a swap file on a slow spinning HDD, especially if it's an external rather than internal drive.
Here is a sample full bazel command with this bazel startup option added, to build an entire repo:
time bazel --host_jvm_args=-Xmx32g build //...
...instead of this:
time bazel build //...
The time prefix just prints a readable report of how long the build took (I like it). Just be sure to set the max JVM heap allotted to bazel for any bazel build command by putting --host_jvm_args=-Xmx32g (or similar) right after the word bazel any time you need it.
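If you'd rather not retype the startup flag each time, Bazel can also read it from a .bazelrc file; a minimal sketch (the 32g value is just an example):
# In .bazelrc at the workspace root, or in ~/.bazelrc:
startup --host_jvm_args=-Xmx32g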
Note that setting the max heap with -Xmx, as we are doing here, is NOT the same thing as setting the initial heap with -Xms. Setting only the max heap still starts from the default initial heap but lets it grow as needed. The other answer shows setting both via an environment variable.
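For illustration, here is a hypothetical invocation that sets both (Bazel accepts the startup flag multiple times):
time bazel --host_jvm_args=-Xms8g --host_jvm_args=-Xmx32g build //...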
Done!
References:
[my own answer] Ask Ubuntu: How do I increase the size of swapfile without removing it in the terminal?
https://linuxize.com/post/how-to-add-swap-space-on-ubuntu-18-04/
https://serverfault.com/questions/684771/best-way-to-disable-swap-in-linux/684792#684792
[my own answer] Ask Ubuntu: How do I configure swappiness?
See also:
https://github.com/bazelbuild/bazel/issues/1308

Centos7 Kickstart on VirtualBox/VMware Workstation

I am trying to create a kickstart for CentOS 7.3. I have a Windows desktop with VMware Workstation Player installed. I started with a DVD that has CentOS 7.3 on it. I then created a VM in VMware Workstation Player and installed the OS. I restarted the VM and copied all the files from /dev/sr0 on my DVD to a new place. I copied the anaconda file and renamed it to ks.cfg. I then used the command below to make an ISO.
mkisofs -o /home/kickstart.iso -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table -J -R -v "centos7.3"
Next I took this and burned it to a blank disc using
growisofs --dvd-compat -Z /dev/cdrom=/home/kickstart.iso
When I mount this in VirtualBox as the optical drive, the installer gets stuck on
Started Show Plymouth Boot Screen
Started Device-Mapper Multipath Device Controller
Starting Open-iSCSI...
Reached target Paths.
Reached target Basic System.
Started Open-iSCSI.
Starting dracut initqueue hook..
then on VMware Workstation Player it goes to
Started Show Plymouth Boot Screen
Started Device-Mapper Multipath Device Controller
Starting Open-iSCSI...
Reached target Paths.
Reached target Basic System.
Started Open-iSCSI.
Starting dracut initqueue hook..
... [sda] Assuming cache: write through
Why is it hanging at these spots? I have looked everywhere and can't seem to find any solutions so far.
You've probably found something else for this by now, but in case not, or in case someone else encounters this... I ran into some issues with this as well. I don't know if I had the exact same issue, but mine was also hanging on the dracut initqueue, and changing this one thing allowed the install to continue.
It turned out to be the -V flag on the mkisofs command. Whatever you name the ISO with the -V flag (which it doesn't look like you have), it needs to match the value of LABEL in your /isolinux/isolinux.cfg file. In my fiddling I was using "MyLinuxISO" for this value.
in my /isolinux/isolinux.cfg:
label linux
menu label ^Install CentOS Linux 7 with KS
menu default
kernel vmlinuz
append initrd=initrd.img inst.stage2=hd:LABEL=MyLinuxISO ks=cdrom:/ks.cfg
using mkisofs
mkisofs -o /home/kickstart.iso -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table -J -R -v -V "MyLinuxISO"
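One way to double-check that the label baked into the ISO matches what isolinux.cfg expects, assuming isoinfo from the genisoimage package is available:
isoinfo -d -i /home/kickstart.iso | grep -i 'volume id'   # label written into the ISO
grep -i LABEL isolinux/isolinux.cfg                       # label the boot entry looks for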
Don't know if this will help you but give it a shot?
Cheers

Darcs push fails for no apparent reason between Windows and Linux using VMware

I have a strange Darcs issue here.
I'm running a VM with a Linux guest OS and a Windows host OS. I've set up /mnt as a "shared folder"; any files placed here are actually stored in a folder on the host OS. Among other things, this causes all files to have their permissions set to 666 root,root. (Naturally, Windows doesn't support Unix-style file modes.)
Here's what happened:
cd /mnt/some-random-folder
darcs init
cd ~/some-random-folder
darcs pull /mnt/some-random-folder
Create a few files
darcs add the files
darcs record
So far, everything works fine. But now...
user1:~/some-random-folder> darcs push
Pushing to "/mnt/some-random-folder"...
Sun Jan 20 12:11:50 GMT 2013 User1
* Update dependencies.
Shall I push this patch? (1/1) [ynW...], or ? for more options: y
darcs: ./_darcs/tentative_pristine-0: rename: permission denied (Permission denied)
Apply failed!
Erm... what the heck just happened??
(And, more to the point, how do I make it stop happening and actually work?)
I tried using cp to synchronise the repos, thinking maybe the problem was that I started with a totally empty repo with no patches. That changes the error message (now it can't open _darcs\index - permission denied), but it still doesn't actually work.
Edit: Darcs 2.8.1 release.
Having played with this further, it appears that accessing files on the host OS from the guest OS does all sorts of strange things. Stuff like, I delete a file, ls tells me the file is gone, but when I try to write to that file, it says it can't because it already exists. Unmount and remount the filesystem and the problem goes away.
In short, I think this is probably nothing to do with Darcs at all, and is just the VMware drives being strange / buggy.
Permissions can be a bit tricky. It's worth checking whether all files in /mnt/some-random-folder really are writable by everybody.
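A quick way to check, assuming GNU find is available in the guest:
find /mnt/some-random-folder ! -writable -ls   # lists anything the current user cannot write to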
I suspect this is not the ideal forum for this sort of question because it will likely involve a lot of back-and-forth chat to figure out what's going on. How about the darcs-users mailing list, or the #darcs IRC channel instead?

VMWARE ESXi PANIC: Failed to find HD boot partition

I've got problems installing the VMware ESXi Server.
The installation finishes without any error messages and prompts me to reboot.
After pressing Enter, the system reboots. While booting through the yellow loading screen, it switches to black and displays the following error message:
PANIC: Failed to find HD boot partition
All modules have been loaded without any errors.
After typing unsupported into the console, the BusyBox shell comes up.
I took a look at the /dev/disks directory, but no disk devices are listed, unlike during the installation process.
When switching to the system console during installation, both SATA disks on the MPC51 controller are shown.
The controllers are named vmhba0 and vmhba32.
Does anyone know how to solve the problem?!
Hardware is a ESPRIMO P5615 (nForce4) from Fujitsu-Siemens.
The only solution I have found is to run the server from a thumb drive and use the embedded hard drive to store your virtual servers. This solution worked for me.
To achieve this you will need:
A USB thumb drive 1GB or larger
An active Linux machine (or, use a liveCD option on your PowerEdge such as Knoppix or Gentoo LiveCD)
Mount your ESXi ISO:
mkdir -p /mnt/esx   # create the mount point if it doesn't exist yet
mount -t iso9660 -o loop VMware-VMvisor-InstallerCD-3.5.0_Update_2-110271.i386.iso /mnt/esx
Write the installer's embedded disk image to the thumb drive (extract it from install.tgz, decompress it, and write it raw to the USB device):
tar xvzf /mnt/esx/install.tgz usr/lib/vmware/installer/VMware-VMvisor-big-3.5.0_Update_2-110271.i386.dd.bz2 -O | bzip2 -d -c | dd of=/dev/sdb
Assumptions here (adjust to your settings):
/dev/sdb is where your thumb drive resides (see the check after this list)
VMware-VMvisor-InstallerCD-3.5.0_Update_2-110271.i386.iso is the name of your ISO file
usr/lib/vmware/installer/VMware-VMvisor-big-3.5.0_Update_2-110271.i386.dd.bz2 is the name of the dd file in your iso (run tar ztf /mnt/esx/install.tgz to see what your exact file name is, it should be similar and relatively obvious)
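Since dd will happily overwrite whatever device you point it at, it's worth confirming that /dev/sdb really is the thumb drive before writing; one quick check (assuming lsblk is available):
lsblk -o NAME,SIZE,MODEL   # identify the USB stick by its size and model before running dd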
It will take a few minutes to write, and when it's done simply boot off of this thumb drive. The PowerEdge servers have an internal USB (at least mine does) if aesthetics are important to you.
Source: http://cyborgworkshop.org/2008/08/30/install-vmware-esxi-onto-a-usb-thumbdrive/
EDIT 12/19/2009: ESXi 4.0.0 uses image.tgz instead of install.tgz to store its dd file