How should I use the command vagrant destroy?
In my Vagrantfile I used vm.config.name = 'websvr', and when I open VirtualBox I can see websvr in the list of VMs.
But whenever I use vagrant destroy websvr it returns:
The machine with the name 'websvr' was not found configured for this Vagrant environment.
How does vagrant destroy work?
The machine no longer exists, but it still appears in the list because the entry is cached. Use vagrant global-status --prune to get rid of it.
See the vagrant global-status documentation for more details.
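For example, something like this should work (the id 1a2b3c4d is just a placeholder; use the id that global-status prints for your machine):
vagrant global-status          # list all Vagrant environments this user knows about, with their ids
vagrant global-status --prune  # drop stale/cached entries
vagrant destroy 1a2b3c4d       # destroy a machine by id, from any directory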
Let's try these actions on the command line.
Check the installed boxes:
vagrant box list
Find the box ID:
vagrant global-status --prune
Select the ID of the box you want to destroy:
vagrant destroy 1a2b3c4d
That's all. Now you can destroy your Vagrant box with vagrant destroy xxxxxxx, as shown above.
Try running vagrant status first, which should list all of your VMs with their current status (running, not created, etc.)
The names of the VMs are displayed in the first column and are case-sensitive; that name is what you pass to vagrant destroy (see the example after the listing below).
For example, this is what the output of vagrant status looks like on my machine:
base not created (virtualbox)
git not created (virtualbox)
go not created (virtualbox)
dev_workstation not created (virtualbox)
single_instance not created (virtualbox)
metrics not created (virtualbox)
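For example, to destroy one of the machines from that listing by its (case-sensitive) name, run this from the directory containing the Vagrantfile:
vagrant destroy dev_workstation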
To completely clean up a VM and start fresh, the steps below worked for me; they are basically a combination of what others have already said.
Check the VM status with Vagrant locally and destroy the VM if it exists. This is all done inside the Vagrant project folder, so MAKE SURE you are in the correct folder!
$ vagrant status
$ vagrant destroy
$ rm -rf .vagrant
Check the VM status with Vagrant globally and "destroy" (prune) it if it exists. This can be done from anywhere:
$ vagrant global-status
$ vagrant global-status --prune
Check the VM status from VirtualBox's perspective and unregister the VM:
$ vboxmanage list vms
### note down long id, eg. c43266e6-e22b-437a-8cc1-541b7ed5c4b
$ vboxmanage unregistervm <long id> --delete
Go back into the appropriate Vagrant folder and start the VM:
$ vagrant up
To destroy the Vagrant machine, you may try these simple steps:
If you are already inside a vagrant ssh session, you need to exit it first; type exit to come out of the session.
Once you are out of the VM, type:
vagrant destroy -f
If that doesn't work for you, you can try it using Bash.
Go into the project folder where your actual code resides, right-click, and choose "Git Bash Here". A Bash window will pop up; just type the same command there: vagrant destroy -f.
I hope these simple steps work out for you.
Below is everything that has been tried and double-checked.
Setup on the local Windows machine:
Xming installed and running.
In ssh_config, ForwardX11 is set to yes.
In the VS Code remote connection config, ForwardX11 is set to yes.
Setup on the GCP compute engine instance with Debian/Linux 9 and 1 GPU (free tier):
xauth is installed.
In the sshd_config file the following is set:
X11Forwarding yes
X11DisplayOffset 10
X11UseLocalhost no
The SSH server has been restarted to ensure these settings are read.
From the local workstation I run gcloud compute ssh --ssh-flag="-X" tensorflow-2-vm (the instance name) and the response is:
/usr/bin/xauth: file /home/user/.Xauthority does not exist,
So I attempted the following on the remote compute engine instance (tensorflow-2-vm) as user trapti_kalra:
trapti_kalra@tensorflow-2-vm:~$ xauth list
xauth: file /home/trapti_kalra/.Xauthority does not exist
trapti_kalra@tensorflow-2-vm:~$ mv .Xauthority old.Xauthority
mv: cannot stat '.Xauthority': No such file or directory
trapti_kalra@tensorflow-2-vm:~$ touch ~/.Xauthority
trapti_kalra@tensorflow-2-vm:~$ xauth generate :0 . trusted
xauth: (argv):1: unable to open display ":0".
trapti_kalra@tensorflow-2-vm:~$ sudo xauth generate :0 . trusted
xauth: file /root/.Xauthority does not exist
xauth: (argv):1: unable to open display ":0".
So it looks like something is missing; any help will be appreciated. This was working with an EC2 server before I moved to GCP.
Create a new file: touch ~/.Xauthority
Log out and back in again with your ssh session. (I'm using MobaXterm.)
Then the needed entries get written to it automatically.
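Once you are back in, a quick check that the file is now being populated (xauth list should no longer complain that .Xauthority does not exist, and the SSH session should have set DISPLAY):
xauth list
echo $DISPLAY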
You logged into your Linux server over SSH and got the following error:
.Xauthority does not exist
Solution:
Go into the /etc/ssh/sshd_config file and remove the # sign at the beginning of the 3 lines below:
X11Forwarding yes
X11DisplayOffset 10
X11UseLocalhost yes
Then run systemctl restart sshd.
Log in again and you will not get the error; you can verify the sshd settings as shown below.
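If you want to check what sshd actually picked up after the restart, a quick sanity check (this assumes an OpenSSH version that supports the -T test mode):
sudo sshd -T | grep -i x11
# the output should include: x11forwarding yes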
There are many solutions to this problem; it can also depend on what machine you connect from. If you come from a Linux box, enabling sshd config options like:
X11Forwarding yes
could be enough.
When you use a MacBook, however, the scenario is different. In that case, you need to install XQuartz with brew:
brew install xquartz
And after this start it:
xQuartz &
After this is done, the XQuartz logo appears in your menu bar; right-click the icon and start the terminal from the Applications menu. From that terminal, run:
echo $DISPLAY
This should give you the output:
:0
If you have another terminal such as iTerm, you can export this value there with export DISPLAY=:0. As long as XQuartz is still running, the other terminal should be able to keep using XQuartz.
After this you can SSH into the remote machine and check if the display variable is set:
$: ssh -Y anldisr#my-remote-machine
$: echo $DISPLAY
localhost:11.0
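To confirm that forwarding really works end to end, you can start a simple X client on the remote machine; this assumes a package such as x11-apps (providing xclock/xeyes) is installed there:
xclock &
# a small clock window should open on your local display via XQuartz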
It took me an hour to figure this out; hope it helps someone. :)
This also happened to me when I added a new user to the remote machine without giving the user sudo privileges during creation.
To resolve it, I used the root user (or a sudo-privileged user) to grant sudo privileges to the new user. Then exit the new user's session and SSH into the server again.
$ sudo usermod -aG sudo [newUser]
Whenever an AWS Auto Scaling group launches a new Ubuntu instance and I try to install any package on it, I get the following error:
[stderr]E: Could not get lock /var/lib/dpkg/lock-frontend - open (11: Resource temporarily unavailable)
[stderr]E: Unable to acquire the dpkg frontend lock (/var/lib/dpkg/lock-frontend),
Is there another process using it?
I found a way to fix it manually, but I don't understand why the error occurs every time the Auto Scaling group launches a new Ubuntu instance.
When any command updates Ubuntu or installs a new application, it locks dpkg (the Debian package manager).
To identify the problem, look at the logs:
If your system is installing updates, you will find them in the journalctl logs: journalctl -u apt-daily.service. This usually happens when the system is set to update itself; you can spot such activity with ps -ef | grep apt.systemd.daily and check the settings in /etc/apt/apt.conf.d/20auto-upgrades.
Check /var/log/dpkg.log* (it may get rotated) to find which packages or services were being installed.
Once you have identified the problem, you can solve with these methods:
If the system is updating, try waiting by executing a sleep loop in the user data of your bootstrapping script (see the sketch after this list).
If the first installation of a service/application is blocking another one, add a condition to wait/sleep until the first service is up, and do the same for the rest of the services you are installing.
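As a rough sketch of the first method (the 120-iteration cap and the package name are placeholders, not anything prescribed by AWS; it also assumes lsof, used later in this answer, is available), a user-data snippet could wait for the lock before installing:
#!/bin/bash
# Wait (up to ~10 minutes) while another process holds the dpkg frontend lock
for i in $(seq 1 120); do
    lsof /var/lib/dpkg/lock-frontend >/dev/null 2>&1 || break
    echo "dpkg lock is held by another process, waiting..."
    sleep 5
done
apt-get update -y
apt-get install -y some-package   # replace with the package you actually need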
This was a common problem in Ubuntu 16.04 LTS; you can find the same issue, along with solution code, at https://forums.aws.amazon.com/thread.jspa?threadID=251663
A snippet of code from the referenced link:
until service codedeploy-agent status >/dev/null 2>&1; do
    sleep 60
    rm -f install
    wget https://aws-codedeploy-us-west-2.s3.amazonaws.com/latest/install
    chmod +x ./install
    sudo ./install auto
    service codedeploy-agent restart
done
SSH into the instance before/while the UserData is running and check which process has acquired the lock:
$ lsof /var/lib/dpkg/lock-frontend
Also, try enabling the CodeDeploy agent as the last step, after performing all other steps in the UserData, for example:
https://gist.github.com/say8425/8344d19911dba20fab5538b85006bd31
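A rough sketch of that ordering in a UserData script (only the CodeDeploy commands are taken from the snippet above; everything else is a placeholder):
#!/bin/bash
# ...all other bootstrap steps (package installs, app config, etc.) go here first...
# Install and start the CodeDeploy agent as the very last step:
cd /tmp
wget https://aws-codedeploy-us-west-2.s3.amazonaws.com/latest/install
chmod +x ./install
./install auto
service codedeploy-agent start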
So I'm having some adventures with the vagrant-aws plugin, and I'm now stuck on the issue of syncing folders. This is necessary to provision the machines, which is the ultimate goal. However, running vagrant provision on my machine yields
[root@vagrant-puppet-minimal vagrant]# vagrant provision
[default] Rsyncing folder: /home/vagrant/ => /vagrant
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
mkdir -p '/vagrant'
I'm almost positive the error occurs because SSH-ing in manually and running that command yields 'permission denied' (obviously, a non-root user is trying to make a directory in the root directory). I tried SSH-ing in as root, but that seems like bad practice (and Amazon doesn't like it). How can I change the folder to be rsynced with vagrant-aws? I can't seem to find the setting for that. Thanks!
Most likely you are running into the known vagrant-aws issue #72: Failing with EC2 Amazon Linux Images.
Edit 3 (Feb 2014): Vagrant 1.4.0 (released Dec 2013) and later versions now support the boolean configuration parameter config.ssh.pty. Set the parameter to true to force Vagrant to use a PTY for provisioning. Vagrant creator Mitchell Hashimoto points out that you must not set config.ssh.pty on the global config, you must set it on the node config directly.
This new setting should fix the problem, and you shouldn't need the workarounds listed below anymore. (But note that I haven't tested it myself yet.) See Vagrant's CHANGELOG for details -- unfortunately the config.ssh.pty option is not yet documented under SSH Settings in the Vagrant docs.
Edit 2: Bad news. It looks as if even a boothook will not be "faster" to run (to update /etc/sudoers.d/ for !requiretty) than Vagrant is trying to rsync. During my testing today I started seeing sporadic "mkdir -p /vagrant" errors again when running vagrant up --no-provision. So we're back to the previous point where the most reliable fix seems to be a custom AMI image that already includes the applied patch to /etc/sudoers.d.
Edit: Looks like I found a more reliable way to fix the problem. Use a boothook to perform the fix. I manually confirmed that a script passed as a boothook is executed before Vagrant's rsync phase starts. So far it has been working reliably for me, and I don't need to create a custom AMI image.
Extra tip: And if you are relying on cloud-config, too, you can create a Mime Multi Part Archive to combine the boothook and the cloud-config. You can get the latest version of the write-mime-multipart helper script from GitHub.
Usage sketch:
$ cd /tmp
$ wget https://raw.github.com/lovelysystems/cloud-init/master/tools/write-mime-multipart
$ chmod +x write-mime-multipart
$ cat boothook.sh
#!/bin/bash
SUDOERS_FILE=/etc/sudoers.d/999-vagrant-cloud-init-requiretty
echo "Defaults:ec2-user !requiretty" > $SUDOERS_FILE
echo "Defaults:root !requiretty" >> $SUDOERS_FILE
chmod 440 $SUDOERS_FILE
$ cat cloud-config
#cloud-config
packages:
- puppet
- git
- python-boto
$ ./write-mime-multipart boothook.sh cloud-config > combined.txt
You can then pass the contents of 'combined.txt' to aws.user_data, for instance via:
aws.user_data = File.read("/tmp/combined.txt")
Sorry for not mentioning this earlier, but I am literally troubleshooting this right now myself. :)
Original answer (see above for a better approach)
TL;DR: The most reliable fix is to "patch" a stock Amazon Linux AMI image, save it and then use the customized AMI image in your Vagrantfile. See below for details.
Background
A potential workaround is described (and linked in the bug report above) at https://github.com/mitchellh/vagrant-aws/pull/70/files. In a nutshell, add the following to your Vagrantfile:
aws.user_data = "#!/bin/bash\necho 'Defaults:ec2-user !requiretty' > /etc/sudoers.d/999-vagrant-cloud-init-requiretty && chmod 440 /etc/sudoers.d/999-vagrant-cloud-init-requiretty\nyum install -y puppet\n"
Most importantly this will configure the OS to not require a tty for user ec2-user, which seems to be the root of the problem. I /think/ that the additional installation of the puppet package is not required for the actual fix (although Vagrant may use Puppet for provisioning the machine later, depending on how you configured Vagrant).
My experience with the described workaround
I have tried this workaround but Vagrant still occasionally fails with the same error. It might be a "race condition" where Vagrant happens to run its rsync phase faster than cloud-init (which is what aws.user_data is passing information to) can prepare the workaround for #72 on the machine for Vagrant. If Vagrant is faster you will see the same error; if cloud-init is faster it works.
What will work (but requires more effort on your side)
What definitely works is to run the command on a stock Amazon Linux AMI image, and then save the modified image (= create an image snapshot) as a custom AMI image of yours.
# Start an EC2 instance with a stock Amazon Linux AMI image and ssh-connect to it
$ sudo su - root
$ echo 'Defaults:ec2-user !requiretty' > /etc/sudoers.d/999-vagrant-cloud-init-requiretty
$ chmod 440 /etc/sudoers.d/999-vagrant-cloud-init-requiretty
# Note: Installing puppet is mentioned in the #72 bug report but I /think/ you do not need it
# to fix the described Vagrant problem.
$ yum install -y puppet
You must then use this custom AMI image in your Vagrantfile instead of the stock Amazon one. The obvious drawback is that you are not using a stock Amazon AMI image anymore -- whether this is a concern for you or not depends on your requirements.
What I tried but didn't work out
For the record: I also tried to pass a cloud-config to aws.user_data that included a bootcmd to set !requiretty in the same way as the embedded shell script above. According to the cloud-init docs bootcmd is run "very early" in the startup cycle for an EC2 instance -- the idea being that bootcmd instructions would be run earlier than Vagrant would try to run its rsync phase. But unfortunately I discovered that the bootcmd feature is not implemented in the outdated cloud-init version of current Amazon's Linux AMIs (e.g. ami-05355a6c has cloud-init 0.5.15-69.amzn1 but bootcmd was only introduced in 0.6.1).
Somehow my Vagrant project has disassociated itself from its VirtualBox VM, so that when I vagrant up Vagrant will import the base-box and create a new virtual machine.
Is there a way to re-associate the Vagrant project with the existing VM?
How does Vagrant internally associate a Vagrantfile with a VirtualBox VM directory?
For Vagrant 1.6.3 do the following:
1) In the directory where your Vagrantfile is located, run the command
VBoxManage list vms
You will have something like this:
"virtualMachine" {xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}
2) Go to the following path:
cd .vagrant/machines/default/virtualbox
3) Create a file called id containing the ID of your VM, xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx (see the example command after this list).
4) Save the file and run vagrant up
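For step 3, a one-liner like the following works from the directory you changed into in step 2 (substitute the UUID that VBoxManage printed for your VM):
echo -n "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" > id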
WARNING: The solution below works for Vagrant 1.0.x but not Vagrant 1.1+.
Vagrant uses the ".vagrant" file in the same directory as your "Vagrantfile" to track the UUID of your VM. This file will not exist if a VM does not exist. The format of the file is JSON. It looks like this if a single VM exists:
{
  "active": {
    "default": "02f8b71c-75c6-4f33-a161-0f46a0665ab6"
  }
}
default is the name of the default virtual machine (if you're not using multi-VM setups).
If your VM has somehow become disassociated, what you can do is do VBoxManage list vms which will list every VM that VirtualBox knows about by its name and UUID. Then manually create a .vagrant file in the same directory as your Vagrantfile and fill in the contents properly.
Run vagrant status to ensure that Vagrant picked up the proper changes.
Note: This is not officially supported by Vagrant and Vagrant may change the format of .vagrant at any time. But this is valid as of Vagrant 0.9.7 and will be valid for Vagrant 1.0.
The solution for newer versions is much the same.
But first you need to launch the .vbox file by hand so that it appears in VBoxManage list vms.
Then you can check .vagrant/machines/default/virtualbox/id to verify that the UUID is the right one.
I had this issue today; my .vagrant folder was missing, and I found that there were a few more steps than simply setting the id:
Set the id:
VBoxManage list vms
Find the id and set it in {project-folder}/.vagrant/machines/default/virtualbox/id.
Note that default may be different if set in your Vagrantfile e.g. config.vm.define "someothername".
Stop the machine from provisioning:
Create a file named action_provision in the same dir as the id file and set its contents to 1.5:{id}, replacing {id} with the id found in step 1.
Setup a new public/private key:
Vagrant uses a private key stored in .vagrant/machines/default/virtualbox/private_key to ssh into the machine. You'll need to generate a new one.
ssh-keygen -t rsa
name it private_key.
Run vagrant ssh, then copy private_key.pub into /home/vagrant/.ssh/authorized_keys on the guest (see the sketch after this list).
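A rough sketch of that key setup, assuming the machine is named default (adjust the path if you used config.vm.define):
cd .vagrant/machines/default/virtualbox
ssh-keygen -t rsa -f private_key -N ""   # creates private_key and private_key.pub
cat private_key.pub
# then run vagrant ssh and append the printed public key to /home/vagrant/.ssh/authorized_keys inside the guest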
Update: I hit the same problem today with Vagrant 1.7.4.
There is a useful thread at https://github.com/mitchellh/vagrant/issues/1755,
especially the following commands.
For example, to pair the box 'vip-quickstart_default_1431365185830_12124' with Vagrant:
$ VBoxManage list vms
"vip-quickstart_default_1431365185830_12124" {50feafd3-74cd-40b5-a170-3c976348de27}
$ echo -n "50feafd3-74cd-40b5-a170-3c976348de27" > .vagrant/machines/default/virtualbox/id
For multi-VM setups, it would look like this:
{
  "active": {
    "web": "a1fc9ae4-5d43-49cb-be31-ab3c4f74745d",
    "db": "13503bc5-76b8-4c26-95c4-32435b372212"
  }
}
You can get the vm names from the Vagrantfile used to create those VMs. Look for this line:
config.vm.define :web do |web_config|
"web" is the name of the vm in this case.
This is modified from @Petecoop's answer.
Run vagrant halt if you haven't shut down the box yet.
Then list your virtualboxes: VBoxManage list vms
It'll list all of your VirtualBox VMs. Identify the box you want to revert to and grab the id between the curly brackets: {}.
Then edit the project id file: sudo nano .vagrant/machines/default/virtualbox/id (from the project directory)
Replace it with the id you copied from the list of VBs.
Try vagrant reload.
If that doesn't work and it gets hung on SSH authorization (where I stumbled), copy the well-known insecure private key from the Vagrant GitHub repository and replace the content of .vagrant/machines/default/virtualbox/private_key with it. Back up the original, of course: cp private_key private_key-bak.
Then run vagrant reload. It'll say it has detected the insecure key and will create a new one:
default: Vagrant insecure key detected. Vagrant will automatically replace
default: this with a newly generated keypair for better security.
default: Inserting generated public key within guest...
default: Removing insecure key from the guest if it's present...
default: Key inserted! Disconnecting and reconnecting using new SSH key...
==> default: Machine booted and ready!
You should be all set.
I'm using Vagrant 1.8.1 on OS X El Capitan.
My VM was not shut down correctly when my computer restarted, so when I tried vagrant up it kept creating a new VM. No solution here worked for me, but a variation of ingmmurillo's answer did.
So instead of creating .vagrant/machines/default/virtualbox/id based on the id from running VBoxManage list vms, I had to update the id in .vagrant/machines/local/virtualbox/id.
I've got a one-liner that essentially does this for me:
echo -n `VBoxManage list vms | head -n 1 | awk '{print substr($2, 2, length($2)-2)}'` > .vagrant/machines/local/virtualbox/id
This assumes the first box listed by VBoxManage list vms is the one I need to start.
In Vagrant 1.9.1:
I had a VM in Virtual Box named 'Ubuntu 16.04.1' so I packaged it as a vagrant box with:
vagrant package --base "Ubuntu 16.04.1"
responds with...
==> Ubuntu 16.04.1: Exporting VM...
==> Ubuntu 16.04.1: Compressing package to: blah blah/package.box
I'm on macOS and found that removing the .locks on the boxes solved my problem.
For some reason
vagrant halt
did not remove these locks. After restoring all my settings in .vagrant/machines/default/virtualbox using Time Machine and removing the locks, the right machine booted up.
Only one minor problem remains: it booted into GRUB, so I had to press Enter once. I don't know if this will persist, but I will find out soon enough.
I'm running Vagrant 1.7.4 and VirtualBox 5.0.2.
For me, deleting the id file worked:
cd yourVagrantProject/.vagrant/machines/default/virtualbox/
rm id
I'm trying to change the TCP/UDP settings of a virtual machine using VBoxManage setextradata.
Whenever I type the command:
sudo VBoxManage setextradata Windows_7 "VBoxInternal/Devices/pcnet/0/LUN#0/Config/guestEmule_TCP/Protocol" TCP
I get the following error:
VirtualBox Command Line Management Interface Version 2.1.4
(C) 2005-2009 Sun Microsystems, Inc.
All rights reserved.
[!] FAILED calling a->virtualBox->FindMachine(Bstr(a->argv[0]), machine.asOutParam()) at line 3688!
[!] Primary RC = VBOX_E_OBJECT_NOT_FOUND (0x80BB0001) - Object corresponding to the supplied arguments does not exist
[!] Full error info present: true , basic error info present: true
[!] Result Code = VBOX_E_OBJECT_NOT_FOUND (0x80BB0001) - Object corresponding to the supplied arguments does not exist
[!] Text = Could not find a registered machine named 'Windows_7'
[!] Component = VirtualBox, Interface: IVirtualBox, {339abca2-f47a-4302-87f5-7bc324e6bbde}
[!] Callee = IVirtualBox, {339abca2-f47a-4302-87f5-7bc324e6bbde}
The virtual machine was created using the GUI. Any idea?
$ cd /Users/marco/Library/VirtualBox/Machines/Windows_7
$ ls
Logs Windows_7.xml
Windows 7.xml.1.5-macosx.bak
$ sudo vboxmanage registervm Windows_7.xml
VirtualBox Command Line Management Interface Version 2.1.4
(C) 2005-2009 Sun Microsystems, Inc.
All rights reserved.
[!] FAILED calling a->virtualBox->OpenMachine(Bstr(a->argv[0]), machine.asOutParam()) at line 762!
[!] Primary RC = NS_ERROR_FAILURE (0x80004005) - Operation failed
[!] Full error info present: true , basic error info present: true
[!] Result Code = NS_ERROR_FAILURE (0x80004005) - Operation failed
[!] Text = Could not lock the settings file '/var/root/Library/VirtualBox/Windows_7.xml' (VERR_FILE_NOT_FOUND)
[!] Component = Machine, Interface: IMachine, {ea6fb7ea-1993-4642-b113-f29eb39e0df0}
[!] Callee = IVirtualBox, {339abca2-f47a-4302-87f5-7bc324e6bbde}
It fails because you are using sudo. VirtualBox is designed to be run by any user (in the vboxusers group), and sudo runs the command as the root user whose VirtualBox configuration is empty.
You can check that by typing:
sudo VBoxManage -nologo list vms # Should print only a newline
VBoxManage -nologo list vms # Detailed information about all your VMs
!! WINDOWS ONLY !!
If you are not on an admin account and are trying to modify your VM in an administrator cmd window, type these commands:
cd "C:\Program Files\Oracle\VirtualBox"
VBoxManage registervm "C:\Users\Your Name Here\VirtualBox VMs\Your VM name here\Your VM name here.vbox"
Now run your VirtualBox modify commands, or whatever else you are doing, and it should work!
Not a direct answer, but just to put it out there for other people searching for it:
On Mac OS X, you can tell VirtualBox to load VMs from another user's home directory, provided the file permissions allow it, or if you are running VirtualBox as the root user using sudo (e.g. if you absolutely have to access your host's web server on port 80).
The way to do this is to set VBOX_USER_HOME appropriately, e.g.
VBOX_USER_HOME=/Users/the_other_user/Library/VirtualBox
If you want to run VBoxHeadless under root, use:
sudo VBOX_USER_HOME=/Users/your_user_id/Library/VirtualBox nohup \
VBoxHeadless -s "IE10 - Win7" </dev/null &>/dev/null &
I had a similar error message whenever I used sudo to start VBoxSDL:
Error: machine with the given name not found!
Check if this VM has been corrupted and is now inaccessible.
And similar to ypocat's answer, I solved it for Ubuntu using a small script like this:
#!/bin/bash
export VBOX_USER_HOME=/home/username/.config/VirtualBox
VBoxSDL --startvm nameOfVM
You can use it whenever you need to start your VM as root.
SOLUTION_1:
Missing virtualization technology might be the reason. Intel systems have Intel VT-x (AMD systems have AMD-V), so make sure it is enabled. You can enable it from the boot screen: go to the BIOS setup, look for the System Configuration tab, and enable virtualization technology.
SOLUTION_2:
Open a terminal, or cmd (Run as administrator) on Windows, and run SC START VBOXDRV. If it says the service is already running, then try SC STOP VBOXDRV followed by SC START VBOXDRV.
RealScar's solution helped me, in addition to other commands, and it worked in Ubuntu 20.04 too.
I had the problem initially indicated (VirtualBox unable to find a registered machine). I was getting no results after typing sudo VBoxManage -nologo list vms, so I manually registered the existing machine by typing sudo vboxmanage registervm /home/user/VirtualBox\ VMs/machinenamefolder/machinename.vbox. It worked great.
Note: I was creating a Cuckoo Malware Sandbox Analysis.
REASON: In the above case it's a UUID mismatch (between what is referenced and what was generated). This is typically caused either by an improper edit of the VirtualBox config files or by accidentally deleting the associated images/configs.
SOLUTION:
FIRST step: correct the UUID mappings.
For example, the UUIDs can be regenerated to correct the mappings (see the sketch at the end of this answer).
Alternatively, if you have edited the .vbox/.vmdk/VirtualBox.xml files, the mappings should be corrected there.
SECOND step: re-register the virtual machine in the UI.
For example, if you can't open the VMs from the VirtualBox GUI or the terminal, remove the "inaccessible" entry from the GUI first. Then open the virtual machine's folder and open the <machine name>.vbox file with VirtualBox, and it will get registered, provided the UUID mappings have already been resolved. Otherwise, follow the errors you get while attempting to register, and make the necessary changes in the .vbox file.
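As a minimal sketch of the first step, assuming the mismatch is on a disk image (the paths below are placeholders for your own files):
VBoxManage internalcommands sethduuid "/path/to/machine-disk001.vmdk"
# then update the corresponding <HardDisk uuid="..."> entry in the machine's .vbox file to match,
# and re-register the machine:
VBoxManage registervm "/path/to/<machine name>.vbox"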