error: file /lib/modules/3.14.32-xxxx-grs-ipv6-64/kernel: No such file or directory
This system is not currently set up to build kernel modules (system extensions).
Running the following commands should set the system up correctly:
yum install kernel-devel-3.14.32-xxxx-grs-ipv6-64
(The last command may fail if your system is not fully updated.)
yum install kernel-devel
vboxdrv.sh: failed: Look at /var/log/vbox-install.log to find out what went wrong.
error: file /lib/modules/3.14.32-xxxx-grs-ipv6-64/kernel: No such file or directory
This system is not currently set up to build kernel modules (system extensions).
There were problems setting up VirtualBox. To re-start the set-up process, run
/sbin/vboxconfig
as root.
Try disabling the Secure Boot option in the BIOS and then install VirtualBox on CentOS. I had also tried running the following command as root, but it didn't solve the problem:
/sbin/vboxconfig
Disabling Secure Boot in the BIOS is what finally fixed it.
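If you want to confirm that Secure Boot is the culprit before rebooting into the BIOS, you can query its state from the running system (this assumes the mokutil package is installed):
mokutil --sb-state
It reports either "SecureBoot enabled" or "SecureBoot disabled".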
After installing the StarTech USB Crash Cart Adapter, I try to run the adapter by double-clicking on its icon, and it never loads. I tried rebooting and the issue persists. When I run the adapter from the terminal using usb-crash-cart-adapter, I get the following error:
I downloaded the Linux package from https://www.startech.com/Server-Management/KVM-Switches/Portable-USB-PS-2-KVM-Console-Adapter-for-Notebook-PCs~NOTECONS01 and installed it using gdebi.
It appears to be a python dependency issue, but I am not sure where.
I had the same issue, which resulted in a phone call to StarTech and them publishing an updated driver, but even that didn't resolve the issue.
To fix it, follow these steps.
1. Change to the directory where the file is located: cd /opt/usb-crash-cart-adapter/20180327/guts
2. In there you will see the libz.so.1 file. It is always good to keep a copy of the original file just in case, so run: cp libz.so.1 libz.so.1.old
3. Replace it with a link to the libz.so.1 already on your system. Since the bundled copy is still in place, force the link: sudo ln -sf /lib/x86_64-linux-gnu/libz.so.1 libz.so.1
4. Reboot the machine.
After that, your crash cart adapter should be good to go. In my case the icons were also not showing before, but that was resolved by this fix as well.
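To verify the link took effect, you can resolve the symlink in place:
readlink -f /opt/usb-crash-cart-adapter/20180327/guts/libz.so.1
This should print a path under /lib/x86_64-linux-gnu (the system zlib) rather than the bundled copy.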
I'm using cloud-init to configure my EC2 instances at launch time, currently just on CentOS 7. I need to upgrade to the latest kernel, etc., so first I have:
package_upgrade: true
Then I add a bunch of repos and install some packages with yum that ultimately compile some kernel modules with DKMS (Nvidia drivers).
Finally I reboot the system with:
power_state:
  mode: reboot
  timeout: 30
This all works great! However, when the system comes back up, DKMS reports that the nvidia driver is "added" but not installed, and the Nvidia driver doesn't work. If I run yum reinstall nvidia-kmod, everything works. So obviously the kernel module is being compiled and installed for the previous kernel, not the new one.
So what is the suggested way to solve this? Is there a way to reboot after package_upgrade but before any of the other steps? Is there a way to force nvidia-kmod to compile for the new kernel rather than the running kernel? Any other ideas?
Looks like the only real option is to create a cloud-init per-boot script that runs dkms autoinstall. On every boot, this attempts to build any "added" kernel modules that aren't yet installed.
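A minimal sketch of such a hook, expressed as cloud-init user data (the per-boot directory is where cloud-init looks for per-boot scripts; the filename and the -k flag are my own choices):
write_files:
  - path: /var/lib/cloud/scripts/per-boot/50-dkms-autoinstall.sh
    permissions: '0755'
    content: |
      #!/bin/sh
      # Build any DKMS modules still in the "added" state
      # against the kernel we actually booted into.
      dkms autoinstall -k "$(uname -r)"
Since scripts in that directory run on every boot, the modules get rebuilt against the new kernel right after the power_state reboot.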
I can't manage to get an icecc daemon to connect to the local icecc-scheduler from any machine running Fedora 20.
I've had no issues setting this up on 5 different Ubuntu 14.04 machines, and each can run the scheduler with no issue. In fact, it appears to work out of the box with no additional config on Ubuntu - simple install and play.
In those cases, on Ubuntu:
sudo apt-get install icecc
sudo service iceccd start
And on one of the machines:
sudo service icecc-scheduler start
Then it's simply a matter of setting the path and building like so:
export PATH=/usr/lib/icecc/bin:$PATH
make -j16
This is all that is needed to get the distributed compile working on Ubuntu as far as I can see.
On Fedora, to install and start I use:
sudo yum install icecream.x86_64
sudo systemctl start iceccd
And compiling with
export PATH=/usr/libexec/icecc/bin:$PATH
make -j16
This doesn't distribute the compile.
The icemon utility on the scheduler does not show any evidence of the Fedora machine either, and running a status on the iceccd service gives this error:
Jul 21 09:44:08 Fedora20 iceccd[4642]: [4642] 09:44:08: scheduler not yet found.
So far the only thing I've tried that seemed like it might be the issue is opening up the ports listed in the readme by adding them to the Zones->Ports part of Firewall Configuration, but this hasn't helped.
Maybe there is something I need to do on the Ubuntu scheduler and daemons? Has anyone else had any luck setting up icecream on Fedora 20?
For other future devs who might come here from Google:
To get icecc working I edited the /usr/lib/systemd/system/icecc/iceccd-wrapper file, adding two arguments to the iceccd command:
-s <scheduler> -m <number of jobs>
Then, after running the following command
sudo systemctl start iceccd
the daemon starts up and is seen by the scheduler.
Remember the ports also need to be open!
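On Fedora with firewalld, opening them can look like this (port numbers are the ones given in the icecream README: TCP 10245 on daemon machines, plus TCP 8765, TCP 8766, and UDP 8765 on the scheduler machine; the example opens all of them on a box that may play either role):
sudo firewall-cmd --permanent --add-port=10245/tcp --add-port=8765/tcp --add-port=8766/tcp --add-port=8765/udp
sudo firewall-cmd --reload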
Instead of editing either /usr/lib/systemd/system/icecc/iceccd-wrapper (as proposed by foips) or /usr/lib/systemd/system/iceccd.service itself, I found it more convenient to modify the global icecream settings file /etc/sysconfig/icecream and set
# If the daemon can't find the scheduler by broadcast (e.g. because
# of a firewall) you can specify it.
#
ICECREAM_SCHEDULER_HOST="<scheduler>"
On Ubuntu 20.04 with ICECC 1.3.1 the config file is /etc/icecc/icecc.conf and the setting is called ICECC_SCHEDULER_HOST. You need to put the scheduler IP there.
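For example, with the scheduler reachable at 192.168.1.50 (address illustrative), the line in /etc/icecc/icecc.conf would read:
ICECC_SCHEDULER_HOST="192.168.1.50"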
I downloaded VirtualBox 4.3.6, and after attempting to install it on Mavericks (OS X 10.9.1) I get a generic error: "The installation failed".
Going through the logs after running the uninstall tool, I arrived at the conclusion that VirtualBox cannot unload these particular kernel extensions: org.virtualbox.kext.VBoxUSB and org.virtualbox.kext.VBoxDrv.
The exact errors are:
(kernel) Can't unload kext org.virtualbox.kext.VBoxUSB; classes have instances:
(kernel) Kext org.virtualbox.kext.VBoxUSB class org_virtualbox_VBoxUSB has 1 instance.
Failed to unload org.virtualbox.kext.VBoxUSB - (libkern/kext) kext is in use or retained (cannot unload).
(kernel) Can't remove kext org.virtualbox.kext.VBoxDrv; services failed to terminate - 0xdc008018.
Failed to unload org.virtualbox.kext.VBoxDrv - (libkern/kext) kext is in use or retained (cannot unload).
Manually attempting to unload the kexts with sudo kextunload -b org.virtualbox.kext.VBoxUSB produces the exact same errors.
Is there any way to remove these? I ran the VirtualBox uninstaller so I'm positive I don't need these for anything else yet they are preventing me from doing a clean VirtualBox install.
I repaired disk permissions, rebooted, ran the uninstall script again, and the next installation was successful.
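In terminal terms, that sequence was roughly the following (the uninstall tool name is the one shipped on the VirtualBox DMG, and diskutil still supported permission repair on Mavericks):
sudo diskutil repairPermissions /
sudo reboot
sudo /Volumes/VirtualBox/VirtualBox_Uninstall.tool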
I was able to do a clean install of 4.3.22-98236-OSX (which I had originally), but upgrading to 4.3.30-101610-OSX or 5.0.0-101573-OSX would fail and throw an error during installation.
Removing the /mach_kernel file solved the "Failed to install" issue for me.
This is not about a Vagrant or VirtualBox guest running slowly due to slow shared folder access; we know that can be resolved, more or less, by enabling NFS.
It's rather about the mounted shared folder going out of sync when there are many file operations within the VM (enabling NFS does not prevent this from happening).
For example, when we install packages inside the VM, say with PHP's composer or node.js's npm, there is a certain probability that a normal composer update or npm install will fail, and once it has failed, only vagrant reload will restore the synced folder and allow the same command to pass without problems.
Such random failures only happen when executing on the shared folder (NFS or not), so apt-get upgrade won't trigger the same problem, as it operates within the VM's own folders.
Since the same sync problem does not appear when we run composer or npm on the host, I am wondering what could be causing it and how we should go about debugging it.
Our vagrant setup and config:
if Vagrant::Util::Platform.windows?
  config.vm.synced_folder "www", "/var/www", :extra => "dmode=777,fmode=777", :owner => "vagrant", :group => "vagrant"
else
  config.vm.synced_folder "www", "/var/www", :extra => "dmode=777,fmode=777", :nfs => true
end
Guest: Ubuntu 12.04 LTS x64
Host: Windows 8, Mac OS X 10.8, Ubuntu 13 (yes, they all run into the same problem randomly)
I think we have more or less discovered the source of the problem:
The Guest Additions version (4.1.x) that comes with our Ubuntu 12 LTS box does not match the VirtualBox version (4.2.x) installed on the host machine, so the file sync fails.
The easy fix:
Run this command within the VM to remove the old Guest Additions: sudo apt-get -y -q purge virtualbox-guest-dkms virtualbox-guest-utils virtualbox-guest-x11
Install the vagrant-vbguest plugin so that future updates are taken care of automatically during vagrant up: https://github.com/dotless-de/vagrant-vbguest
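The plugin itself is installed on the host with:
vagrant plugin install vagrant-vbguest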