Running pre-import customizations in Vagrant - virtualbox

I need to do some customization on the created VM, either before importing it or just before running it for the first time. For instance, I need to clear stale NAT port forwarding rules that tend to be left behind by a previous box with the same name, and remove some disk controllers (reattach existing disks to an IDE controller instead of SATA for compatibility with older OS revisions that do not understand SATA, etc.).
There are pre-boot and pre-import events in the Vagrant code, but is there any way to run some VirtualBox/Vagrant commands before booting the created VM?

Yes, for running VBoxManage commands, see the "VBoxManage Customizations" chapter in the docs. The commands run during the pre-boot phase by default, but you can also specify the phase as the first argument:
Vagrant.configure("2") do |config|
  # ...
  config.vm.provider "virtualbox" do |v|
    v.customize "pre-boot", ["modifyvm", :id, "--cpus", 2]
  end
end
But I think the problem is that you don't have an easy and reliable way to get the disk image path.
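For the cleanup tasks in the question, the same mechanism should work in principle, since customize just runs arbitrary VBoxManage subcommands against the machine. A minimal sketch, assuming a forwarding rule named "ssh" and a controller named "SATA Controller" (both names are illustrative and must match your box):
Vagrant.configure("2") do |config|
  config.vm.provider "virtualbox" do |v|
    # Delete a stale NAT port forwarding rule (the rule name "ssh" is illustrative).
    v.customize "pre-boot", ["modifyvm", :id, "--natpf1", "delete", "ssh"]
    # Remove a SATA controller so the disks can be reattached to IDE
    # (controller name is illustrative; adjust it to your box).
    v.customize "pre-boot", ["storagectl", :id, "--name", "SATA Controller", "--remove"]
  end
end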

Related

Vagrantfile with multiple vm and providers

I am trying to write a Vagrantfile with multiple machines backed by multiple providers. I specifically want to be able to spawn more than one of those machines in one go. Basically, I want to run the command:
vagrant up vb_vm aws_vm
I am aware of the --provider flag, but this would apply to all machines being spawned, so it is not applicable in my case.
This is my (very trimmed down but still valid) Vagrantfile:
Vagrant.configure(2) do |config|
  config.vm.define 'vb_vm' do |vb_vm|
    vb_vm.vm.box = 'ubuntu/trusty64' # from hashicorp
    vb_vm.vm.provider :virtualbox do |v|
    end
  end
  config.vm.define 'aws_vm' do |aws_vm|
    aws_vm.vm.box = "aws/dummy"
    aws_vm.vm.box_url = 'https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box'
    aws_vm.vm.provider :aws do |a, override|
      a.access_key_id = 'something'
      a.secret_access_key = 'something'
      a.ami = 'something'
    end
  end
end
A vagrant box list shows that the boxes used for each definition are of the right type:
aws/dummy (aws, 0)
ubuntu/trusty64 (virtualbox, 20150928.0.0)
But a vagrant status gives me this (note that I do have the lxc plugin available, which became the default):
Current machine states:
aws_vm not created (aws)
vb_vm not created (lxc)
So this shows that spawning multiple machines with multiple providers is indeed possible, but the choice of provider is wrong.
I am aware of the tricks to set up the default provider, but this only makes things worse (virtualbox used everywhere, aws not used at all...)
I am aware of old stackoverflow questions as well, but they are related to a much older version of Vagrant.
So the question is: how do I make sure that each box defined uses its proper provider?
The trick is to create each VM with its own provider.
For example, I've defined a quick Vagrantfile (minimized) with boxes for each provider:
Vagrant.configure(2) do |config|
  config.vm.define "db" do |db|
    db.vm.box = "..."
    db.vm.hostname = "db"
  end
  config.vm.define "app", primary: true do |app|
    app.vm.box = "..."
    app.vm.hostname = "app"
    app.ssh.forward_agent = true
    app.ssh.forward_x11 = true
    app.vm.provider "vmware_fusion" do |vm|
      vm.vmx["memsize"] = "4096"
    end
  end
end
I create each VM separately:
fhenri@machine:~/project/examples/vagrant/multimachine$ vagrant up db --provider=virtualbox
Bringing machine 'db' up with 'virtualbox' provider...
....
fhenri@machine:~/project/examples/vagrant/multimachine$ vagrant up app
Bringing machine 'app' up with 'vmware_fusion' provider...
....
Then I halt everything, and the next time I do vagrant up:
fhenri@machine:~/project/examples/vagrant/multimachine$ vagrant up
Bringing machine 'db' up with 'virtualbox' provider...
Bringing machine 'app' up with 'vmware_fusion' provider...
and the status looks good:
fhenri@machine:~/project/examples/vagrant/multimachine$ vagrant status
Current machine states:
db running (virtualbox)
app running (vmware_fusion)
This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.

Vagrant managed docker container doesn't start

I've been trying to write a Vagrantfile to start up my Docker container to run a small web app I've been writing. However, when I try vagrant up I eventually get an error saying:
The container started either never left the "stopped" state or
very quickly reverted to the "stopped" state. This is usually
because the container didn't execute a command that kept it running,
and usually indicates a misconfiguration.
If you meant for this container to not remain running, please
set the Docker provider configuration "remains_running" to "false":
  config.vm.provider "docker" do |d|
    d.remains_running = false
  end
I'm very new to vagrant so I'm not really sure what the best way to try and fix the problem is.
My Vagrantfile contains:
Vagrant.configure("2") do |config|
  config.vm.synced_folder "thelibrary", "/thelibrary"
  config.vm.provider "docker" do |d|
    d.image = "django-dev"
    d.has_ssh = false
    d.ports = ["8000:8000"]
    d.cmd = ["python", "/thelibrary/manage.py", "runserver", "0.0.0.0:8000"]
  end
end
I'm not sure why it says the command doesn't keep running. I can run the docker container with the same command and it will spin up my django app without any issues.
I had the same problem, but adding the option
d.create_args = ["-i"]
solved it for me.
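For context, in the asker's Vagrantfile that option would go inside the Docker provider block, roughly like this (a sketch reusing the image and command from the question; -i asks Docker to keep STDIN open so the container is not treated as stopped immediately):
Vagrant.configure("2") do |config|
  config.vm.synced_folder "thelibrary", "/thelibrary"
  config.vm.provider "docker" do |d|
    d.image = "django-dev"
    d.has_ssh = false
    d.ports = ["8000:8000"]
    d.cmd = ["python", "/thelibrary/manage.py", "runserver", "0.0.0.0:8000"]
    # Extra arguments passed to `docker run`; -i keeps STDIN open.
    d.create_args = ["-i"]
  end
end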
I spent the day trying to get the Docker machine running... I finally got it working. Here is what I have in my Vagrantfile; hope this can at least get you started:
config.vm.provider :docker do |d|
  d.image = "paintedfox/postgresql"
  d.name = "db"
  d.cmd = ["/sbin/my_init", "--enable-insecure-key"]
end
vagrant status returns me this:
Current machine states:
dev running (docker)
Another solution you can try is to remove all your existing images and start fresh; it could be that your image is broken.

why do you need to be in the specified directory when creating multi-boxes in vagrant

I'm trying to create multiple boxes to be loaded by Vagrant when running:
vagrant up kali
vagrant up metasploitable2
The configs I have set up are:
Within the Kali Vagrantfile:
Vagrant.configure("1") do |config|
  config.vm.define "kali" do |kali|
    kali.vm.box = "Kali"
  end
end
Within the Metasploitable2 Vagrantfile:
Vagrant.configure("1") do |config|
  config.vm.define "metasploitable2" do |metasploitable2|
    metasploitable2.vm.box = "metasploitable2"
  end
end
If I browse to the directory where the .vmdk and Vagrantfile are located and run
vagrant up kali
it creates the Kali image; however, if I'm not in that directory, it won't load the VM,
failing with the error:
The machine with the name 'kali' was not found configured for
this Vagrant environment.
I presume this is because it isn't able to read the configuration file, but how can I make this work globally? I thought you weren't supposed to modify the 'global' Vagrantfile at all.
Well, Vagrant has to find the Vagrantfile to read it, doesn't it? =)
So you either have to be in the same directory or any subdirectory below it. Or you can set VAGRANT_CWD environment variable to point to the directory. See the "Lookup Path" section in the Vagrantfile documentation for more information.
You can of course make a wrapper script or other shortcuts if you need to use that often.
By the way, you might want to upgrade your Vagrantfiles to the V2 configuration format to use all the new features of Vagrant 1.1+.
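For illustration, the Kali Vagrantfile from the question written in the V2 format would look roughly like this (a sketch; only the version argument changes here, but V2 is what unlocks the newer provider and provisioner options):
Vagrant.configure("2") do |config|
  config.vm.define "kali" do |kali|
    kali.vm.box = "Kali"
  end
end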

vboxmanage.exe error could not rename the directory

I am using VirtualBox 4.2.18 and Vagrant 1.3.3 on Windows 7. I have done a vagrant box add:
vagrant box add MyBox http://ergonlogic.com/files/boxes/debian-LAMP-current.box
But when I get to the vagrant up step, I get the following error: "vboxmanage.exe error could not rename the directory..."
Any help would be appreciated.
Thanks,
Derek
I tried:
vagrant destroy -f
manually deleting the virtualboxes in their directory
restarting my machine
reinstalling both vagrant and virtualbox
downgrading vagrant and virtualbox
running with sudo
and nothing worked. The only thing that worked for me was opening the VirtualBox interface, going to Preferences, and changing the Default Machine Folder from VirtualBox VMs to just VMs.
I wasted about 4 hours on that problem. Hopefully someone with the same problem finds this post.
I went to the directory
VirtualBox VMs
and deleted everything inside. Then I just did vagrant up, and it worked.
I was finally able to figure this out. It turns out it is useful to know how to set two specific directory paths for VirtualBox. This was particularly useful because I run my machine under an account that does not have administrative privileges, so I needed to get VirtualBox to use directory paths that I had security access to. The first is the VBOX_USER_HOME environment variable, which can be set via System Properties / Environment Variables on Windows 7; VBOX_USER_HOME controls where the .VirtualBox directory goes. Secondly, set where the *.vbox files go, which is typically a directory called VirtualBox VMs. To set this path, open the VirtualBox GUI, go to File > Preferences, and set the path in the Default Machine Folder input box.
Hopefully this info will help others.
Derek
What worked for me:
1) I had to manually delete the C:\Users\My_name\VirtualBox VMs\machine_name folder.
2) To prevent this from happening again, before running 'vagrant destroy' I always stop the current machine with 'vagrant suspend'.
I just removed every subfolder under this folder and it worked.
Don't destroy your Vagrant machine! That is a last resort.
Type this in your console:
VBoxManage list vms
Copy the id of your machine, something like:
7fca07b2-65c6-420e-84b5-b958c15449a1
Open your vagrant machine id file, something like:
.vagrant/machines/default/virtualbox/id
Replace its contents with the id you just copied, then run:
vagrant up
This always works for me. If not, only as a last option, you can try: vagrant destroy -f
On Windows 10 using VirtualBox v6.1.26 I encountered the same problem.
Here is how I could re-create the VM after a broken vagrant destroy.
Try:
Run vagrant destroy -f
Check the available machines with vagrant global-status --prune
Find the VMs folder in C:/Users/your_username/.VirtualBoxMachines and try deleting the one with the name of your machine manually using the file explorer
If you cannot delete the folder (some processes have open files within it), try restarting your computer and then delete it
Now it should work again with vagrant up
This worked for me!
That error means there is another VM in VirtualBox with the same name as the one you used for this VM. So go back to the folder of the VM you ran previously and destroy it with "vagrant destroy -f". Then try running this VM again.
Working with Vagrant I had a similar error. This was due to naming conflicts. What solved it for me was to remove the name of the instance from the Vagrantfile.
vb.customize ["modifyvm", :id,
  "--name", "oracle",
  "--memory", "512",
  "--natdnshostresolver1", "on"]
Change that to:
vb.customize ["modifyvm", :id,
  "--memory", "512",
  "--natdnshostresolver1", "on"]
You just need to find your folder called VirtualBox VMs.
In that folder you should see your machines.
Rename the folder you want, then run:
vagrant up
and it should run successfully.
vagrant destroy -f
Find the folder VirtualBox VMs and delete the machine you want to rename.
Run vagrant up in your project root.
This worked for me!
I don't know how it works, but I just killed the VirtualBox process and ran 'vagrant reload'.
On Windows, if none of these solutions work, try running the command in PowerShell as Administrator.
Ladies and gentlemen, just go to the Vagrantfile and change this:
vb.customize ["modifyvm", :id,
  "--name", "oracle",
  "--memory", "512",
  "--natdnshostresolver1", "on"]
Change the name value, as it conflicts with another installed (or failed-to-install) Vagrant box. The new Vagrantfile should look like:
vb.customize ["modifyvm", :id,
  "--name", "oracle2",
  "--memory", "512",
  "--natdnshostresolver1", "on"]
On Ubuntu 20.04
First, run
vagrant destroy
Go to this directory:
/home/your_username/VirtualBox VMs
Delete all files and directories in that directory (warning: this step deletes all your VMs) like so:
rm -rf *
And then run
vagrant up

Vagrant Rsync Error before provisioning

So I'm having some adventures with the vagrant-aws plugin, and I'm now stuck on the issue of syncing folders. This is necessary to provision the machines, which is the ultimate goal. However, running vagrant provision on my machine yields
[root@vagrant-puppet-minimal vagrant]# vagrant provision
[default] Rsyncing folder: /home/vagrant/ => /vagrant
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
mkdir -p '/vagrant'
I'm almost positive about the cause: ssh-ing manually and running that command yields 'permission denied' (obviously, a non-root user is trying to make a directory in the root directory). I tried ssh-ing as root, but that seems like bad practice (and Amazon doesn't like it). How can I change the folder to be rsynced with vagrant-aws? I can't seem to find the setting for that. Thanks!
Most likely you are running into the known vagrant-aws issue #72: Failing with EC2 Amazon Linux Images.
Edit 3 (Feb 2014): Vagrant 1.4.0 (released Dec 2013) and later versions now support the boolean configuration parameter config.ssh.pty. Set the parameter to true to force Vagrant to use a PTY for provisioning. Vagrant creator Mitchell Hashimoto points out that you must not set config.ssh.pty on the global config, you must set it on the node config directly.
This new setting should fix the problem, and you shouldn't need the workarounds listed below anymore. (But note that I haven't tested it myself yet.) See Vagrant's CHANGELOG for details -- unfortunately the config.ssh.pty option is not yet documented under SSH Settings in the Vagrant docs.
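As a rough sketch of what "on the node config directly" means (the machine name here is a placeholder, not taken from the original setup):
Vagrant.configure("2") do |config|
  config.vm.define "aws_vm" do |node|
    # Set the PTY flag on the node config, not on the top-level config object.
    node.ssh.pty = true
  end
end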
Edit 2: Bad news. It looks as if even a boothook will not reliably run (to update /etc/sudoers.d/ for !requiretty) before Vagrant tries to rsync. During my testing today I started seeing sporadic "mkdir -p /vagrant" errors again when running vagrant up --no-provision. So we're back to the previous point, where the most reliable fix seems to be a custom AMI image that already includes the applied patch to /etc/sudoers.d.
Edit: Looks like I found a more reliable way to fix the problem. Use a boothook to perform the fix. I manually confirmed that a script passed as a boothook is executed before Vagrant's rsync phase starts. So far it has been working reliably for me, and I don't need to create a custom AMI image.
Extra tip: And if you are relying on cloud-config, too, you can create a Mime Multi Part Archive to combine the boothook and the cloud-config. You can get the latest version of the write-mime-multipart helper script from GitHub.
Usage sketch:
$ cd /tmp
$ wget https://raw.github.com/lovelysystems/cloud-init/master/tools/write-mime-multipart
$ chmod +x write-mime-multipart
$ cat boothook.sh
#!/bin/bash
SUDOERS_FILE=/etc/sudoers.d/999-vagrant-cloud-init-requiretty
echo "Defaults:ec2-user !requiretty" > $SUDOERS_FILE
echo "Defaults:root !requiretty" >> $SUDOERS_FILE
chmod 440 $SUDOERS_FILE
$ cat cloud-config
#cloud-config
packages:
- puppet
- git
- python-boto
$ ./write-mime-multipart boothook.sh cloud-config > combined.txt
You can then pass the contents of 'combined.txt' to aws.user_data, for instance via:
aws.user_data = File.read("/tmp/combined.txt")
Sorry for not mentioning this earlier, but I am literally troubleshooting this right now myself. :)
Original answer (see above for a better approach)
TL;DR: The most reliable fix is to "patch" a stock Amazon Linux AMI image, save it and then use the customized AMI image in your Vagrantfile. See below for details.
Background
A potential workaround is described (and linked in the bug report above) at https://github.com/mitchellh/vagrant-aws/pull/70/files. In a nutshell, add the following to your Vagrantfile:
aws.user_data = "#!/bin/bash\necho 'Defaults:ec2-user !requiretty' > /etc/sudoers.d/999-vagrant-cloud-init-requiretty && chmod 440 /etc/sudoers.d/999-vagrant-cloud-init-requiretty\nyum install -y puppet\n"
Most importantly this will configure the OS to not require a tty for user ec2-user, which seems to be the root of the problem. I /think/ that the additional installation of the puppet package is not required for the actual fix (although Vagrant may use Puppet for provisioning the machine later, depending on how you configured Vagrant).
My experience with the described workaround
I have tried this workaround but Vagrant still occasionally fails with the same error. It might be a "race condition" where Vagrant happens to run its rsync phase faster than cloud-init (which is what aws.user_data is passing information to) can prepare the workaround for #72 on the machine for Vagrant. If Vagrant is faster you will see the same error; if cloud-init is faster it works.
What will work (but requires more effort on your side)
What definitely works is to run the command on a stock Amazon Linux AMI image, and then save the modified image (= create an image snapshot) as a custom AMI image of yours.
# Start an EC2 instance with a stock Amazon Linux AMI image and ssh-connect to it
$ sudo su - root
$ echo 'Defaults:ec2-user !requiretty' > /etc/sudoers.d/999-vagrant-cloud-init-requiretty
$ chmod 440 /etc/sudoers.d/999-vagrant-cloud-init-requiretty
# Note: Installing puppet is mentioned in the #72 bug report but I /think/ you do not need it
# to fix the described Vagrant problem.
$ yum install -y puppet
You must then use this custom AMI image in your Vagrantfile instead of the stock Amazon one. The obvious drawback is that you are not using a stock Amazon AMI image anymore -- whether this is a concern for you or not depends on your requirements.
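Using the custom image then amounts to pointing the AWS provider at your own AMI id, something like this (a sketch; the AMI id and block variable names are placeholders):
config.vm.provider :aws do |aws, override|
  # Reference the patched custom AMI instead of the stock Amazon Linux image.
  aws.ami = "ami-xxxxxxxx"  # placeholder for your custom AMI id
end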
What I tried but didn't work out
For the record: I also tried to pass a cloud-config to aws.user_data that included a bootcmd to set !requiretty in the same way as the embedded shell script above. According to the cloud-init docs bootcmd is run "very early" in the startup cycle for an EC2 instance -- the idea being that bootcmd instructions would be run earlier than Vagrant would try to run its rsync phase. But unfortunately I discovered that the bootcmd feature is not implemented in the outdated cloud-init version of current Amazon's Linux AMIs (e.g. ami-05355a6c has cloud-init 0.5.15-69.amzn1 but bootcmd was only introduced in 0.6.1).