I am trying, unsuccessfully, to run "minikube start" on my Ubuntu 18.04 system.
The characteristics of the system are:
Windows 10 PC with VirtualBox 6.1.16
In VirtualBox I have installed Ubuntu 18.04 (11GB of memory and 2 processors)
I've also enabled nested virtualization
In Ubuntu I also installed (after searching with Google): virtualbox-dkms and linux-headers-generic
Every time I run
minikube start
I get the following messages
minikube v1.15.1 on Ubuntu 18.04
Automatically selected the virtualbox driver
Downloading VM boot image ...
minikube-v1.15.0.iso.sha256: 65 B / 65 B [-------] 100.00% ? p/s 0s
minikube-v1.15.0.iso: 181.00 MiB / 181.00 MiB [] 100.00% 6.45 MiB p/s 28s
Starting control plane node minikube in cluster minikube
Downloading Kubernetes v1.19.4 preload ...
preloaded-images-k8s-v6-v1.19.4-docker-overlay2-amd64.tar.lz4: 486.35 MiB
Creating virtualbox VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
and here it stops.
I also thought the problem might be related to the lz4 format, i.e. that the machine was unable to decompress the preload image, so I installed liblz4-tool, but without success.
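Two checks that may help narrow this down (my additions, not from the original report): confirm that hardware virtualization is actually visible inside the Ubuntu guest, and re-run minikube with verbose logging to see where the VM creation hangs.
# Nested VMs need VT-x/AMD-V exposed to the guest; a result of 0 here means
# VirtualBox's nested virtualization is not actually reaching Ubuntu.
egrep -c '(vmx|svm)' /proc/cpuinfo
# Verbose logging shows which step of the VirtualBox VM creation stalls.
minikube start --driver=virtualbox --alsologtostderr -v=7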
Could you help me?
Thanks
I've been running XDP applications on servers with Intel Xeon Gold CPUs; performance was always good and never a problem: up to 125 Mpps with an MCX515A-CCAT 100 GbE network card and 2 CPUs in a 1U server.
Today I tried to make it work on an AMD EPYC 7371, and for some reason the results were very low: the maximum I got is 13 Mpps with the same network card and a single AMD EPYC 7371. All cores are pushed to their max (100%).
Testing was done on Ubuntu 18.04 (and on 20.04 afterwards). I installed the same Mellanox drivers (OFED) as I do for Intel and made the other usual configurations.
I run XDP in Driver mode, JIT enabled. Other tuning as follows:
sudo mlnx_tune -p HIGH_THROUGHPUT
sudo ethtool -G enp65s0 rx 512 tx 512
sudo ethtool -L enp65s0 combined 8
sudo ethtool --show-priv-flags enp65s0
sudo ethtool --set-priv-flags enp65s0 rx_cqe_compress on
Is there anything else I should do to run it on an AMD CPU? The performance just can't be this low, so I suspect I've misconfigured something.
A bit more info about our config: kernel 5.4 on 20.04 and 4.15 on 18.04. We generate traffic from a second machine (TRex), which also has an MCX515A-CCAT network card and is connected to the first (AMD) machine with a Mellanox 100 GbE cable.
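One thing worth checking on EPYC (my suggestion, not part of the original tuning) is the NUMA locality of the NIC: the 7371's multi-die layout means queues served by cores on the wrong node can collapse throughput. The commands below are illustrative; the set_irq_affinity_bynode.sh helper ships with Mellanox OFED/mlnx-tools.
# Which NUMA node hosts the NIC? (-1 means the platform doesn't report it)
cat /sys/class/net/enp65s0/device/numa_node
lscpu | grep -i numa
# Stop irqbalance so pinned affinities stick, then steer the NIC's IRQs
# to the node reported above (node 0 is used here as an example).
sudo systemctl stop irqbalance
sudo set_irq_affinity_bynode.sh 0 enp65s0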
I have tried two methods to upload an OpenWrt x86_64 image as an AWS AMI and run it on EC2, but both failed.
The image I built runs fine on VirtualBox and VMware.
The first method: vm_import/export.
I followed the instructions at https://amazonaws-china.com/cn/ec2/vm-import/, but the vm_import tool eventually failed with "Not found initrd in Grub".
OpenWrt doesn't use an initrd at boot. This is the default boot entry in grub.cfg:
menuentry "OpenWrt" {
linux /boot/vmlinuz root=PARTUUID=fbdad417-02 rootfstype=ext4 rootwait console=tty0 console=ttyS0,115200n8 noinitrd
}
The second method: ec2-bundle-image/ec2-upload-bundle.
This way I could upload the image and metadata files to S3, create a new AMI, and launch an EC2 instance, but the instance did not boot correctly: it stopped at the grubdom> prompt.
I followed the instructions at https://forum.archive.openwrt.org/viewtopic.php?id=41588; it seems a little old, and I couldn't find the AKI it mentions, so I used an alternative one (aki-7077ab11 pv-grub-hd0_1.05-x86_64.gz).
Both the combined image (the default OpenWrt build) and a custom image (release rootfs.tar.gz with the kernel and grub config copied in) failed. Here is the EC2 instance system log:
Xen Minimal OS!
start_info: 0x10d4000(VA)
nr_pages: 0xe504a
shared_inf: 0xeeb28000(MA)
pt_base: 0x10d7000(VA)
nr_pt_frames: 0xd
mfn_list: 0x9ab000(VA)
mod_start: 0x0(VA)
mod_len: 0
flags: 0x300
cmd_line: root=/dev/sda1 ro console=hvc0 4
stack: 0x96a100-0x98a100
MM: Init
_text: 0x0(VA)
_etext: 0x7b824(VA)
_erodata: 0x97000(VA)
_edata: 0x9cce0(VA)
stack start: 0x96a100(VA)
_end: 0x9aa700(VA)
start_pfn: 10e7
max_pfn: e504a
Mapping memory range 0x1400000 - 0xe504a000
setting 0x0-0x97000 readonly
skipped 0x1000
MM: Initialise page allocator for 1809000(1809000)-e504a000(e504a000)
MM: done
Demand map pfns at e504b000-20e504b000.
Heap resides at 20e504c000-40e504c000.
Initialising timer interface
Initialising console ... done.
gnttab_table mapped at 0xe504b000.
Initialising scheduler
Thread "Idle": pointer: 0x20e504c050, stack: 0x1f10000
Thread "xenstore": pointer: 0x20e504c800, stack: 0x1f20000
xenbus initialised on irq 3 mfn 0xfeffc
Thread "shutdown": pointer: 0x20e504cfb0, stack: 0x1f30000
Dummy main: start_info=0x98a200
Thread "main": pointer: 0x20e504d760, stack: 0x1f40000
"main" "root=/dev/sda1" "ro" "console=hvc0" "4"
vbd 2049 is hd0
******************* BLKFRONT for device/vbd/2049 **********
backend at /local/domain/0/backend/vbd/27482/2049
2097152 sectors of 512 bytes
**************************
vbd 2064 is hd1
******************* BLKFRONT for device/vbd/2064 **********
backend at /local/domain/0/backend/vbd/27482/2064
8377344 sectors of 512 bytes
**************************
GNU GRUB version 0.97 (3752232K lower / 0K upper memory)
[ Minimal BASH-like line editing is supported. For
the first word, TAB lists possible command
completions. Anywhere else TAB lists the possible
completions of a device/filename. ]
grubdom>
Any ideas? Thanks.
This is an easy task that doesn't need any complicated setup.
I used VirtualBox, but any other virtualization platform can be used (e.g. VMware or Hyper-V).
In my experience, getting OpenWrt onto AWS fails with every import method other than importing a snapshot.
1. Download OpenWrt: https://downloads.openwrt.org/releases/19.07.5/targets/x86/64/
2. Install OpenWrt on VirtualBox and create an OVA (https://openwrt.org/docs/guide-user/virtualization/virtualbox-vm):
2a) convert the img to a vdi, for example: VBoxManage convertfromraw --format VDI openwrt-x86-64-combined.img openwrt.vdi
2b) extend the vdi to 1 GB, for example: VBoxManage modifymedium openwrt.vdi --resize 1024
2c) boot OpenWrt
2d) change the eth0 interface to DHCP, for example: vi /etc/config/network
2e) shut down
2f) export the VM to an OVA
3. Rename the .ova to .zip
4. Unzip the .zip; this yields the vmdk file of the virtual disk
5. Upload the vmdk to an AWS S3 bucket
6. Add the vmimport role to your account: https://www.msp360.com/resources/blog/how-to-configure-vmimport-role/
7. Import the vmdk as a snapshot (see the CLI sketch after this list): https://docs.aws.amazon.com/vm-import/latest/userguide/vmimport-import-snapshot.html
8. Create a new EC2 instance
9. Replace the EC2 instance volume with the imported volume: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-restoring-volume.html
10. Boot up
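For step 7, a minimal sketch using the AWS CLI; the bucket name, key, and description are placeholders:
# containers.json tells import-snapshot where the uploaded vmdk lives
cat > containers.json <<'EOF'
{
  "Description": "openwrt disk",
  "Format": "VMDK",
  "UserBucket": {
    "S3Bucket": "my-import-bucket",
    "S3Key": "openwrt.vmdk"
  }
}
EOF
aws ec2 import-snapshot --description "openwrt" --disk-container file://containers.json
# Poll until the task reports "completed", then note the snapshot ID it produced.
aws ec2 describe-import-snapshot-tasks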
I performed a cluster node installation using this guide: [OpenStack Charms Deployment Guide](https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/install-maas.html). The network type is a flat network, and the components used are:
MAAS
Juju
OpenStack
My lab is composed of the following devices:
1 IBM System 3540 M4 for MAAS (500GB HDD - 8GB RAM - 1 NIC)
1 IBM System 3540 M4 for Juju (500GB HDD - 8GB RAM - 1 NIC)
4 IBM System 3540 M4 for OpenStack (2x500GB HDD - 16GB RAM - 2 NICs)
1 Palo Alto Networks firewall
Public network 10.20.81.0/24 - private network 10.0.0.0/24
MAAS: 10.20.81.1
Juju: 10.20.81.2
OpenStack: 10.20.81.21-24
Gateway: 10.20.81.254
Instance: 10.0.0.9 - 10.20.81.215 (floating)
network plan

                     10.20.81.0/24
                    +--------------+
                    |   Firewall   |
                    | 10.20.81.254 |
                    +------+-------+
                           |
+-------------------------------------------------------------+
|                       Switch (vlan81)                        |
+-------------------------------------------------------------+
       |                     |                     |
+--------------+      +-------------+      +------------------+
| Maas+Juju    |      | Juju Gui    |      | Openstack        |
| 10.20.81.1   |      | 10.20.81.2  |      | 10.20.81.21-24   |
+--------------+      +-------------+      +------------------+
                             |
     +--------------------------------------------+
     Private Subnet-1              Public Subnet-2
     10.0.0.0/24                   10.20.81.0/24
     +-----+-----+                 +-----+-----+
           |                             |
           |          +----+             |
           +----------+ VR +-------------+
           |          +----+
        +--+--+
        | VM  |
        | .9  |
        +-----+
In my lab, the OpenStack nodes have two Ethernet interfaces: the first (eno2) carries the single external network used for floating IPs, and the second (eno3) carries the private network.
In the Juju GUI I have:
neutron-gateway:
bridge-mappings: physnet1:br-ex
data-port: br-ex:eno2
neutron-api:
flat-network-providers: physnet1
I opened this post, https://ask.openstack.org/en/question/119783/no-route-to-instance-ssh-and-ping-no-route-to-host/, to resolve the problem of ping and SSH connections to my instance failing, but during the same checks I saw this issue on neutron-gateway:
error: "could not add network device eno2 to ofproto (Device or resource busy)"
Maybe that is the cause of my first issue, but I don't understand how to fix it.
$:juju ssh neutron-gateway/0
Welcome to Ubuntu 18.04.2 LTS (GNU/Linux 4.15.0-46-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
System information as of Tue Mar 19 16:07:19 UTC 2019
System load: 0.64 Processes: 409
Usage of /: 5.7% of 273.00GB Users logged in: 0
Memory usage: 13% IP address for lxdbr0: 10.122.135.1
Swap usage: 0% IP address for br-eno2: 10.20.81.21
ovs-vsctl show output
ubuntu@os-compute01:~$ sudo ovs-vsctl show
6f8542aa-45d7-409d-8787-8983f3c643eb
Manager "ptcp:6640:127.0.0.1"
is_connected: true
Bridge br-ex
Controller "tcp:127.0.0.1:6633"
is_connected: true
fail_mode: secure
Port "eno2"
Interface "eno2"
error: "could not add network device eno2 to ofproto (Device or resource busy)"
Port br-ex
Interface br-ex
type: internal
Port phy-br-ex
Interface phy-br-ex
type: patch
options: {peer=int-br-ex}
Bridge br-tun
Controller "tcp:127.0.0.1:6633"
is_connected: true
fail_mode: secure
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
Port br-tun
Interface br-tun
type: internal
Port "gre-0a145118"
Interface "gre-0a145118"
type: gre
options: {df_default="true", in_key=flow, local_ip="10.20.81.21", out_key=flow, remote_ip="10.20.81.24"}
Bridge br-int
Controller "tcp:127.0.0.1:6633"
is_connected: true
fail_mode: secure
Port br-int
Interface br-int
type: internal
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port "tapb0b04b07-8f"
tag: 2
Interface "tapb0b04b07-8f"
Port int-br-ex
Interface int-br-ex
type: patch
options: {peer=phy-br-ex}
Port "tap2354468c-88"
tag: 4
Interface "tap2354468c-88"
Port "tap6d2b2fe0-47"
tag: 4
Interface "tap6d2b2fe0-47"
ovs_version: "2.10.0"
juju status
$:juju status
Model Controller Cloud/Region Version SLA Timestamp
openstack maas-cloud-controller maas-cloud 2.5.1 unsupported 22:10:17Z
App Version Status Scale Charm Store Rev OS Notes
ceph-mon 13.2.4+dfsg1 active 3 ceph-mon jujucharms 31 ubuntu
ceph-osd 13.2.4+dfsg1 active 3 ceph-osd jujucharms 273 ubuntu
ceph-radosgw 13.2.4+dfsg1 active 1 ceph-radosgw jujucharms 262 ubuntu
cinder 13.0.2 active 1 cinder jujucharms 276 ubuntu
cinder-ceph 13.0.2 active 1 cinder-ceph jujucharms 238 ubuntu
glance 17.0.0 active 1 glance jujucharms 271 ubuntu
keystone 14.0.1 active 1 keystone jujucharms 288 ubuntu
mysql 5.7.20-29.24 active 1 percona-cluster jujucharms 272 ubuntu
neutron-api 13.0.2 active 1 neutron-api jujucharms 266 ubuntu
neutron-gateway 13.0.2 active 1 neutron-gateway jujucharms 256 ubuntu
neutron-openvswitch 13.0.2 active 3 neutron-openvswitch jujucharms 255 ubuntu
nova-cloud-controller 18.0.3 active 1 nova-cloud-controller jujucharms 316 ubuntu
nova-compute 18.0.3 active 3 nova-compute jujucharms 290 ubuntu
ntp 3.2 active 4 ntp jujucharms 31 ubuntu
openstack-dashboard 14.0.1 active 1 openstack-dashboard jujucharms 271 ubuntu
rabbitmq-server 3.6.10 active 1 rabbitmq-server jujucharms 82 ubuntu
Unit Workload Agent Machine Public address Ports Message
ceph-mon/0 active idle 1/lxd/0 10.20.81.4 Unit is ready and clustered
ceph-mon/1 active idle 2/lxd/0 10.20.81.8 Unit is ready and clustered
ceph-mon/2* active idle 3/lxd/0 10.20.81.5 Unit is ready and clustered
ceph-osd/0 active idle 1 10.20.81.23 Unit is ready (1 OSD)
ceph-osd/1 active idle 2 10.20.81.22 Unit is ready (1 OSD)
ceph-osd/2* active idle 3 10.20.81.24 Unit is ready (1 OSD)
ceph-radosgw/0* active idle 0/lxd/0 10.20.81.15 80/tcp Unit is ready
cinder/0* active idle 1/lxd/1 10.20.81.18 8776/tcp Unit is ready
cinder-ceph/0* active idle 10.20.81.18 Unit is ready
glance/0* active idle 2/lxd/1 10.20.81.6 9292/tcp Unit is ready
keystone/0* active idle 3/lxd/1 10.20.81.20 5000/tcp Unit is ready
mysql/0* active idle 0/lxd/1 10.20.81.17 3306/tcp Unit is ready
neutron-api/0* active idle 1/lxd/2 10.20.81.7 9696/tcp Unit is ready
neutron-gateway/0* active idle 0 10.20.81.21 Unit is ready
ntp/0* active idle 10.20.81.21 123/udp chrony: Ready
nova-cloud-controller/0* active idle 2/lxd/2 10.20.81.3 8774/tcp,8775/tcp,8778/tcp Unit is ready
nova-compute/0 active idle 1 10.20.81.23 Unit is ready
neutron-openvswitch/1 active idle 10.20.81.23 Unit is ready
ntp/2 active idle 10.20.81.23 123/udp chrony: Ready
nova-compute/1 active idle 2 10.20.81.22 Unit is ready
neutron-openvswitch/2 active idle 10.20.81.22 Unit is ready
ntp/3 active idle 10.20.81.22 123/udp chrony: Ready
nova-compute/2* active idle 3 10.20.81.24 Unit is ready
neutron-openvswitch/0* active idle 10.20.81.24 Unit is ready
ntp/1 active idle 10.20.81.24 123/udp chrony: Ready
openstack-dashboard/0* active idle 3/lxd/2 10.20.81.19 80/tcp,443/tcp Unit is ready
rabbitmq-server/0* active idle 0/lxd/2 10.20.81.16 5672/tcp Unit is ready
Machine State DNS Inst id Series AZ Message
0 started 10.20.81.21 nbe8q3 bionic Openstack Deployed
0/lxd/0 started 10.20.81.15 juju-26461e-0-lxd-0 bionic Openstack Container started
0/lxd/1 started 10.20.81.17 juju-26461e-0-lxd-1 bionic Openstack Container started
0/lxd/2 started 10.20.81.16 juju-26461e-0-lxd-2 bionic Openstack Container started
1 started 10.20.81.23 pdnc7c bionic Openstack Deployed
1/lxd/0 started 10.20.81.4 juju-26461e-1-lxd-0 bionic Openstack Container started
1/lxd/1 started 10.20.81.18 juju-26461e-1-lxd-1 bionic Openstack Container started
1/lxd/2 started 10.20.81.7 juju-26461e-1-lxd-2 bionic Openstack Container started
2 started 10.20.81.22 yxkyet bionic Openstack Deployed
2/lxd/0 started 10.20.81.8 juju-26461e-2-lxd-0 bionic Openstack Container started
2/lxd/1 started 10.20.81.6 juju-26461e-2-lxd-1 bionic Openstack Container started
2/lxd/2 started 10.20.81.3 juju-26461e-2-lxd-2 bionic Openstack Container started
3 started 10.20.81.24 bgqsdy bionic Openstack Deployed
3/lxd/0 started 10.20.81.5 juju-26461e-3-lxd-0 bionic Openstack Container started
3/lxd/1 started 10.20.81.20 juju-26461e-3-lxd-1 bionic Openstack Container started
3/lxd/2 started 10.20.81.19 juju-26461e-3-lxd-2 bionic Openstack Container started
iptables
Any suggestions, please? I am still unable to solve the problem. Thanks.
Update 26/03/19:
In the Juju GUI I have:
neutron-gateway:
bridge-mappings: physnet1:br-ex
data-port: br-ex:eno2
neutron-api:
flat-network-providers: physnet1
Before deploying OpenStack, I changed data-port from br-ex:eno2 to br-ex:eno3:
neutron-gateway:
bridge-mappings: physnet1:br-ex
data-port: br-ex:eno3
The issue on eno2 has been resolved, but the instance still cannot be pinged.
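After a change like this, it may be worth confirming that the new port attached cleanly; a quick illustrative check:
# eno3 should now appear under br-ex with no error: line beneath it.
sudo ovs-vsctl list-ports br-ex
sudo ovs-vsctl show | grep -A 1 'Port "eno3"'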
System:
EC2 Instance Type: m5.2xlarge
OS: Ubuntu 14.04.5 LTS
Kernel Version: 4.4.0-1022-aws
vCPU: 8 Cores
Memory: 32 GiB
Java Version: 1.8.0_171
We are running Kafka in cluster mode with two brokers (10.0.51.1 and 10.0.51.2) and three ZooKeeper nodes. I wanted to upgrade my AWS EC2 instance, so I ran sudo apt-get update and sudo apt-get upgrade and installed the linux-aws kernel. After changing the instance type, I started getting the following error on the 10.0.51.2 broker:
18/06/08 07:03:50 52902420 [kafka-network-thread-9092-0] ERROR kafka.network.Processor - Closing socket for /10.0.51.1 because of error
kafka.common.KafkaException: Wrong request type 18
at kafka.api.RequestKeys$.deserializerForKey(RequestKeys.scala:64)
at kafka.network.RequestChannel$Request.<init>(RequestChannel.scala:50)
at kafka.network.Processor.read(SocketServer.scala:450)
at kafka.network.Processor.run(SocketServer.scala:340)
at java.lang.Thread.run(Thread.java:748)
18/06/08 07:03:50 52902571 [kafka-network-thread-9092-2] ERROR kafka.network.Processor - Closing socket for /10.0.51.1 because of error
kafka.common.KafkaException: Wrong request type 16
at kafka.api.RequestKeys$.deserializerForKey(RequestKeys.scala:64)
at kafka.network.RequestChannel$Request.<init>(RequestChannel.scala:50)
at kafka.network.Processor.read(SocketServer.scala:450)
at kafka.network.Processor.run(SocketServer.scala:340)
at java.lang.Thread.run(Thread.java:748)
Only one broker is giving this error, but my Kafka cluster is up and running; I can consume and produce on all topics.
Any help would be appreciated.
Thanks.
The client sent an ApiVersionsRequest (request type 18) and a ListGroupsRequest (request type 16); see the source code.
Probably not all brokers/consumers/producers are using the same version after the upgrade. Can you verify and, if needed, align the versions? Newer clients should be able to communicate with older versions, so the library used in your code is probably the older one.
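A quick way to compare versions on each host is to look at the Kafka jars on the broker and at the client library in your build; the paths here are illustrative:
# On each broker: the jar name encodes scala_version-kafka_version,
# e.g. kafka_2.11-0.10.2.1.jar (path assumes a standard install dir).
ls /opt/kafka/libs/ | grep '^kafka_'
# In a Maven project, check the resolved client library version, e.g.:
mvn dependency:tree | grep kafka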
The command vagrant up is failing and I don't know why.
$ egrep -v '^ *(#|$)' Vagrantfile
VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
config.vm.box = "precise32"
end
$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
[default] Importing base box 'precise32'...
[default] Matching MAC address for NAT networking...
[default] Setting the name of the VM...
[default] Clearing any previously set forwarded ports...
[default] Creating shared folders metadata...
[default] Clearing any previously set network interfaces...
[default] Preparing network interfaces based on configuration...
[default] Forwarding ports...
[default] -- 22 => 2222 (adapter 1)
[default] Booting VM...
[default] Waiting for VM to boot. This can take a few minutes.
The VM failed to remain in the "running" state while attempting to boot.
This is normally caused by a misconfiguration or host system incompatibilities.
Please open the VirtualBox GUI and attempt to boot the virtual machine
manually to get a more informative error message.
$ vagrant status
Current machine states:
default poweroff (virtualbox)
The VM is powered off. To restart the VM, simply run `vagrant up`
$ VBoxManage list runningvms
$
Here are the messages in the VirtualBox log file, VBoxSVC.log:
$ cat ~/.VirtualBox/VBoxSVC.log
VirtualBox XPCOM Server 4.2.16 r86992 linux.amd64 (Jul 4 2013 16:29:59) release log
00:00:00.000499 main Log opened 2013-08-13T18:40:45.907580000Z
00:00:00.000508 main OS Product: Linux
00:00:00.000509 main OS Release: 3.6.11-4.fc16.x86_64
00:00:00.000510 main OS Version: #1 SMP Tue Jan 8 20:57:42 UTC 2013
00:00:00.000537 main DMI Product Name: X8DA3
00:00:00.000547 main DMI Product Version: 1234567890
00:00:00.000647 main Host RAM: 24103MB total, 17127MB available
00:00:00.000654 main Executable: /usr/local/VirtualBox/VBoxSVC
00:00:00.000655 main Process ID: 9417
00:00:00.000656 main Package type: LINUX_64BITS_GENERIC
00:00:00.110125 nspr-2 Loading settings file "/opt/tomcat/.VirtualBox/VirtualBox.xml" with version "1.12-linux"
00:00:00.110817 nspr-2 Failed to retrive disk info: getDiskName(/dev/md126p1) --> md126p1
00:00:00.264367 nspr-2 VDInit finished
00:00:00.275173 nspr-2 Loading settings file "/opt/tomcat/VirtualBox VMs/vagrant_getting_started_default_1376419129/vagrant_getting_started_default_1376419129.vbox" with version "1.12-linux"
00:00:05.288923 main ERROR [COM]: aRC=VBOX_E_OBJECT_IN_USE (0x80bb000c) aIID={29989373-b111-4654-8493-2e1176cba890} aComponent={Medium} aText={Medium '/opt/tomcat/VirtualBox VMs/vagrant_getting_started_default_1376419129/box-disk1.vmdk' cannot be closed because it is still attached to 1 virtual machines}, preserve=false
00:00:05.290229 Watcher ERROR [COM]: aRC=E_ACCESSDENIED (0x80070005) aIID={3b2f08eb-b810-4715-bee0-bb06b9880ad2} aComponent={VirtualBox} aText={The object is not ready}, preserve=false
$
Any advice would be greatly appreciated.
Had the same error on OSX. Restarting VirtualBox fixed it :S
sudo /Library/StartupItems/VirtualBox/VirtualBox restart
Also see: https://forums.virtualbox.org/viewtopic.php?t=5489
I solved the problem by re-installing VirtualBox and adding myself to the vboxusers group. The re-installation process printed a message indicating that VM users had to be a member of that group. I don't know if the re-installation was necessary or if being added to the group would have sufficed.
The host machine was 32-bit (Ubuntu) and the guest was 64-bit; I changed the guest to 32-bit and that solved the problem.
My understanding is that the vboxusers group is related to accessing USB devices from within the guest, so I'm not sure why it would cause this issue. Normally, as a Vagrant base box build guideline, audio and USB are both disabled.
As per the VirtualBox Manual => The vboxusers group:
The Linux installers create the system user group vboxusers during installation. Any system user who is going to use USB devices from VirtualBox guests must be a member of that group. A user can be made a member of the group vboxusers through the GUI user/group management or at the command line with: sudo usermod -a -G vboxusers username
Note that adding an active user to that group will require the user to log out and back in again. This should be done manually after successful installation of the package.
I had the same problem. It was caused by a wrong configuration in the provider section of my Vagrantfile: I had tried to make my VM more powerful, with 2 CPUs, when my host machine has just one.
This often happens when you try to give your VM more hardware than your host machine actually has.
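For reference, a minimal sketch of the kind of provider section meant here; the values are illustrative and should stay at or below what the host actually has:
Vagrant.configure("2") do |config|
  config.vm.box = "precise32"
  config.vm.provider "virtualbox" do |vb|
    vb.cpus = 1        # never request more CPUs than the host has physical cores
    vb.memory = 1024   # likewise, keep memory within what the host can spare
  end
end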