I use Vagrant with several VMs that depend on each other.
For example, A runs an NFS server, B is an NFS client of A, etc.
When I run vagrant up, A is launched, then B, etc.
But when I run vagrant halt, A is shut down first, then B fails to stop properly because it tries to unmount an NFS disk that is offline.
Is there a way to reverse the shutdown order?
Are you using Vagrant's multi-machine option? With it you can define both machines in the same Vagrantfile, and vagrant up and vagrant halt will affect both.
Also take a look at Vagrant's machine load order.
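For reference, a minimal multi-machine Vagrantfile might look like this (the machine names and box are placeholders):

Vagrant.configure("2") do |config|
  # "a" is defined first, so "vagrant up" boots it before "b"
  config.vm.define "a" do |a|
    a.vm.box = "ubuntu/trusty64"
  end
  config.vm.define "b" do |b|
    b.vm.box = "ubuntu/trusty64"
  end
end

You can also halt specific machines in whatever order you need, e.g. vagrant halt b followed by vagrant halt a.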
I recently set up a Trellis/Bedrock/Sage environment. I like it so far, but I've been running into the issue that if I step away from my computer, I can't reconnect to my local production environment and have to start up my WordPress install from scratch. Is there a way to "save" a Vagrant box so I can close my computer and not have to vagrant destroy, then vagrant up each time?
Thanks,
You could probably use Save State directly in VirtualBox to accomplish what you want.
Also, I was able to find this comment, which may also do the trick:
In order to keep the same VM around and not lose its state, use the vagrant suspend and vagrant resume commands. This will make your VM survive a host machine reboot with its state intact.
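In practice that workflow is just the following (the project path is a placeholder):

cd /path/to/trellis    # the directory containing your Vagrantfile
vagrant suspend        # saves the VM's state to disk
# ...close the laptop, come back later...
vagrant resume         # restores the VM exactly where it left off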
When I connect to an EC2 instance via MobaXterm, my Jupyter notebook's kernel loses its connection after some period of time.
Some highly time-consuming operations (the currently running tasks) then have to be re-run again and again and never finish; this repeats every single time.
I close the notebook and restart it to regain a connection to the kernel, because it doesn't reconnect on its own, and I have to go through the whole process again when it eventually dies.
It also sometimes shows an SSL "wrong version number" error before disconnecting.
I have also faced a similar problem. I solved it with the help of tmux.
I followed these steps:
I installed tmux on the machine in the AWS instance.
[Actually, it came preinstalled with the AMI I had been using on the EC2 instance.]
I created a tmux session simply by entering the command: tmux
Then I ran the necessary commands to start the Jupyter server or Jupyter notebook.
To close the terminal, I detached from the session with: (i) Ctrl+b, (ii) d
[Please note, the session will continue running on the EC2 instance until you stop the instance or shut down the Jupyter server or notebook.]
To connect to the session again, I used the command: tmux attach
To finally kill the tmux session when I am done, I used the command: tmux kill-session
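Put together, the whole cycle looks like this (the session name "jupyter" is just an example):

tmux new -s jupyter              # start a named session
jupyter notebook --no-browser    # run the server inside it
# detach with Ctrl+b, then d -- the server keeps running
tmux attach -t jupyter           # reattach after reconnecting over SSH
tmux kill-session -t jupyter     # tear it down when you are finished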
Just use nohup. It comes standard on virtually all Linux machines (it's part of GNU coreutils).
So you should do: nohup jupyter notebook > output.txt 2>&1 &
The trailing & backgrounds the process, and 2>&1 captures errors in output.txt as well. You can then safely terminate the console session without worrying about killing the notebook.
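If you later need to stop the detached server, a quick sketch (the stop subcommand needs a reasonably recent notebook version, and 8888 is the default port):

pgrep -f jupyter-notebook    # find its PID, then kill <PID>
jupyter notebook stop 8888   # or stop it by port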
I have an Ubuntu 14.04 headless host server.
As the root user, I vagrant up a VM that uses VirtualBox.
Inside this VM is a Django Python 3 app.
Every time I vagrant up and vagrant ssh into this VM, I need to run sudo service gunicorn start.
If I exit from the vagrant ssh session and then switch to another user, the app dies.
How do I keep this Django app running in the VM permanently?
If the host machine has to reboot for whatever reason, how can the Django app start itself automatically?
In summary:
How can I keep Vagrant and the gunicorn inside the VM running for a very long time while I switch between users in the host OS?
Is there a way to automatically revive Vagrant and the gunicorn inside it whenever the host OS is rebooted?
Use:
sudo service gunicorn start &
The & sign makes your command run in a separate process from the terminal's, so you can close the terminal without closing gunicorn.
By the way, this is not Vagrant-related; it works this way in all Linux-like shells.
For your second question, you need to use something like supervisor to handle this for you.
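A minimal supervisor program definition for gunicorn could look like this (the paths, WSGI module, and user are placeholders for your project); supervisord starts it at boot and restarts it if it dies:

[program:gunicorn]
command=/path/to/venv/bin/gunicorn myproject.wsgi:application --bind 0.0.0.0:8000
directory=/path/to/myproject
user=www-data
autostart=true
autorestart=true
stdout_logfile=/var/log/gunicorn.out.log
stderr_logfile=/var/log/gunicorn.err.log

Note this only covers the process inside the VM; bringing the VM itself back after a host reboot still requires running vagrant up on the host (e.g. from an @reboot cron entry).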
I've installed a Vagrant + VirtualBox setup using Chef (+ librarian-chef). When I do vagrant up the first time, cookbooks get loaded correctly. However, when I provision afterwards (be it vagrant provision, vagrant reload --provision, or vagrant up --provision) I get this error:
Shared folders that Chef requires are missing on the virtual machine.
This is usually due to configuration changing after already booting the
machine. The fix is to run a `vagrant reload` so that the proper shared
folders will be prepared and mounted on the VM.
I searched everywhere and the only solution given is to do vagrant reload --provision, but this only worked up to Vagrant 1.3.1.
It seems there is a bug with synced folders; this clears the cache and fixed it for me (run from your project directory):
rm .vagrant/machines/default/virtualbox/synced_folders
vagrant reload --provision
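If your machine isn't named "default" (e.g. in a multi-machine setup), the cached file lives under that machine's name instead; a sketch, where <machine-name> is whatever you called the machine in your Vagrantfile:

rm .vagrant/machines/<machine-name>/virtualbox/synced_folders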
https://github.com/mitchellh/vagrant/issues/5199
EDIT: this should be fixed in Vagrant 1.7.4.
That's a fairly common issue with the Vagrant plugins for both Berkshelf and Librarian. Just get used to running that command.
The way to avoid it is to use something like Test Kitchen instead of the Vagrant plugins. That isn't a drop-in replacement, though.
I'm running a virtual machine with supervisord to start and maintain several important background processes. I create the virtual machine with Vagrant and VirtualBox, and provision it with Puppet. When the machine starts up, supervisord grabs all the .conf files in /etc/supervisor/conf.d and tries to run them. Unfortunately, when I run
vagrant up
supervisord starts trying to run the files in conf.d immediately, before the synced folders are shared. So starting some background processes like Xvfb works just fine, but starting my stat tracker, which resides within the synced folder, won't be possible. In fact, I see in the supervisord logs multiple attempts to start the process, complaining that it can't find the file, before finally giving up. Then, once the machine is fully running, I can SSH in, run the exact same command from the .conf file, and start the process myself.
I have created an intermediary script that loops continuously, waiting for the synced folder to become available, and then starts the processes I want. But in this case supervisor has no way to make sure the process keeps running, and it feels like a hack.
Is there a cleaner way to do this? Maybe from within Puppet or Vagrant?
After some more googling I found this article, which solved my problem: http://razius.com/articles/launching-services-after-vagrant-mount/
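For reference, the general shape of that approach (a sketch, assuming an Upstart-based guest such as a 14.04-era Ubuntu box, where Vagrant emits a vagrant-mounted Upstart event once the synced folders are up; the job file name and the "stats-tracker" program name are placeholders):

# /etc/init/start-after-mount.conf
# Fires only after Vagrant has mounted the synced folders.
start on vagrant-mounted
script
  supervisorctl start stats-tracker
end script

In that program's supervisor .conf, set autostart=false so supervisord doesn't race the mount at boot; once started via supervisorctl, supervisord still restarts the process if it crashes.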