I have a specific Vagrantfile for a lucid32 box that installs some pretty standard packages (PHP/MySQL/Apache), sets up some specific port forwarding, and so on. I set it up, check out a project from git, and change some server configurations. I want to package this box for other developers on the team to use, so I run:
vagrant package boxname --output test.box --vagrantfile Vagrantfile
I get a test.box file, and the packaging output notes that it's adding my specific Vagrantfile. Then I run:
vagrant box add boxname test.box
It appears in vagrant box list just fine. Now when I create a test directory and do vagrant init boxname followed by vagrant up,
it provisions the box with the files from the checkout and everything, but it does not set up the proper port forwarding.
In fact it is not using the Vagrantfile I packaged at all; it generated a new default one in that directory.
I noticed that ~/.vagrant.d/boxes/boxname contains the default Vagrantfile, along with the one I packaged in includes/_Vagrantfile.
Is there any way I can get the specific Vagrantfile to be the one generated when I do vagrant init boxname?
This is actually all working as intended. Vagrant loads multiple Vagrantfiles (not just the one from vagrant init) when loading up, and the Vagrantfile packaged with a box is one of those. For more information, read about the "Vagrantfile Load Order" here: https://www.vagrantup.com/docs/vagrantfile/#load-order-and-merging
In a nutshell: Vagrant loads multiple Vagrantfiles in a specific order, and merges their configurations. The box Vagrantfile is loaded as part of this process prior to the root Vagrantfile (the one from vagrant init).
There is currently no way to have vagrant init generate a custom skeleton Vagrantfile.
The configuration you put in the packaged Vagrantfile, such as port forwards, should load properly, since merging the configurations simply appends all of the port forwards.
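For instance, a packaged Vagrantfile that only adds a forwarded port might look like this (the port numbers are just an example, and this assumes the Vagrant 1.1+ configuration syntax):
Vagrant.configure("2") do |config|
  # Forward guest port 80 to host port 8080 for every machine built from this box
  config.vm.network "forwarded_port", guest: 80, host: 8080
end
When someone later runs vagrant init boxname and vagrant up, this forward should be merged with whatever their own root Vagrantfile defines.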
If this isn't happening, you should report a bug! But for the sake of a SO answer, this is how things are supposed to behave.
This command worked for me:
vagrant package --output vagrant_example.box
Related
I would like to use Google Container OS as my cloud development environment. How would I run the docker command from the toolbox? Do I need to add docker.sock as a bind mount? I need to be able to run docker (and docker-compose) to run my development environment.
Google Container OS images come with docker already installed and configured, so if you create a virtual machine from one of these images and SSH into it, you will be able to use the docker command from the command line without any prior configuration.
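For example, assuming a Container OS instance named cos-dev in zone us-central1-a (both names are placeholders), something like this should work straight away:
gcloud compute ssh cos-dev --zone us-central1-a
docker run --rm hello-world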
As for docker-compose, this doesn't come pre-installed. However, you can install it, and other relevant tools/programs you require, by making use of the toolbox you mentioned, which provides a shell (including a package manager) in a Debian chroot-like environment where you automatically gain root privileges.
You can install docker-compose by following these steps:
1) If you haven't already, enter the toolbox environment by running /usr/bin/toolbox
2) Check the latest version of docker-compose here.
3) You can run the following to retrieve and install docker-compose on the machine (substitute the version number in the URL with the latest version you found in step 2):
curl -L https://github.com/docker/compose/releases/download/1.18.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
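If the download succeeds, you will most likely also need to make the file executable before it can be run:
chmod +x /usr/local/bin/docker-compose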
You've probably found at this point that although you can now run the freshly installed docker-compose command within the toolbox, you can't run the docker command. This is because, by default, the toolbox environment doesn't have access to all paths within the rootfs, and the filesystem it sees doesn't correspond to the one in the standard environment.
It may be possible to remedy this by exiting the toolbox shell and then editing the /etc/default/toolbox file, which allows you to configure the toolbox settings. This lets you provide access to the docker binary from the standard environment by following these steps:
1) Ensure you are no longer in the toolbox shell, then run the command which docker. You will see something similar to /usr/bin/docker.
2) Open file /etc/default/toolbox
3) The TOOLBOX_BIND line specifies the paths from rootfs to be made available inside the toolbox environment. To ensure docker is available inside the toolbox environment, you could try adding an entry to the TOOLBOX_BIND section, for example --bind=/usr/bin/docker:/usr/bin/docker.
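As a rough sketch, the edited line might end up looking something like the following (keep whatever --bind options are already present on your image, and note that the docker.sock bind is an assumption based on the docker CLI needing to reach the host daemon's socket):
TOOLBOX_BIND="--bind=/usr/bin/docker:/usr/bin/docker --bind=/var/run/docker.sock:/var/run/docker.sock"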
However, I've found that even though it's possible to edit /etc/default/toolbox to make the docker binary available in the toolbox environment, certain docker commands still produce additional errors there. This is because the docker version that comes pre-installed on the machine is configured to use certain configuration files and directories. Although it may be possible to make all of those locations accessible from within the toolbox via /etc/default/toolbox, it may be simpler to install docker within the toolbox by following the instructions for installing docker on Debian found here.
You would then be able to issue both the docker and docker-compose commands from within the toolbox.
Alternatively, you can simply install docker and docker-compose on a standard VM (i.e. without using a Google Container OS machine type), although the suitability of this depends on your use case.
I'm new to Hyperledger and am studying it by following the tutorials on http://hyperledger-fabric.readthedocs.io. I am trying to build the first network using "first-network" in the fabric-samples. The ./byfn -m generate is OK, but after typing ./byfn -m up, I get
/bin/bash: ./scripts/script.sh: No such file or directory
and the process hangs.
What is going wrong?
PS: The OS is Windows 10.
Check to see if you have a local firewall enabled. Depending on your Docker configuration, a firewall may prohibit the Docker daemon from accessing shared drives as specified in the Docker setup (Windows).
Restart the Docker daemon after applying local firewall changes.
I was facing the same issue and was able to resolve it.
The shared network drive needs to be working for any directory on the local machine to be visible from the container.
Docker, for example, has a "Shared Drives" setting (usually C:\) under which all of your byfn.sh paths must be present. The second condition is that you need to run the byfn.sh script as the same user who authorized sharing the drives with the containers. A password change in your Windows environment can break the already existing drive shares with the containers, and hence cause problems when starting them.
Follow these steps:
In your Docker terminal, check the path $HOME by typing the command echo $HOME.
Make sure that your fabric-samples folder is located under the path given by $HOME (see the check after this list).
Follow the steps for generating your first network.
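For example, from the Docker terminal (the paths below assume the default layout from the tutorial):
echo $HOME
ls "$HOME/fabric-samples/first-network"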
Or try the solution below.
Follow these steps:
Go to settings of docker.
Click on reset credentials.
Now check if the shared drives include the required drives or not.
If not, then include them, apply your changes, and restart Docker and the bash session in which you were trying to start your network.
I know the question is old, but I faced a similar issue, so I did the following:
./byfn.sh -m generate
./byfn.sh -m up
I was missing .sh in both commands.
Assuming the Django project repository is on GitHub and I have had no interaction with it previously.
So: I cd to a new directory on my computer.
I clone the repository.
If the django project is using postgresql, do I have to have postgresql installed on my local machine?
Do I have to be running in a virtual environment to use a specific interpreter?
Thanks Peter
Database
You can actually use a different database for your local copy if you choose, although in general it's a good idea to run the same database engine locally as in production.
If you're going to be using postgres locally, yes, you'll need to install it and then create your local database. Once you have your local database set up, you'll need to change some config values in the DATABASES setting of your settings module.
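For example, assuming postgres is installed and running locally, creating the database could be as simple as this (the database name is purely illustrative):
createdb myproject_dev
You would then point the NAME (and, if needed, USER, PASSWORD and HOST) values in DATABASES at this local database.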
Packages
Your project will also have some dependencies, which should be listed in a requirements.txt file at the root of the repository. If it is not there, you'll need to find out which packages need to be installed, for example via pip freeze in the production console.
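For example, on a machine that already has the project's packages installed, a sketch of capturing them would be:
pip freeze > requirements.txt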
Virtual Env
You should use a virtual environment, but it's not completely necessary to get your project up and running. Virtualenvs allow you to have different installs and runtimes for different projects.
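A minimal sketch of setting one up for this project might be (the environment name venv is arbitrary):
pip install virtualenv
virtualenv venv
source venv/bin/activate
pip install -r requirements.txt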
Other
Every project is different, and there will most likely be some other things that pop up. However, this should get you going in the right direction.
I've set up a Vagrant + VirtualBox machine provisioned with Chef (plus librarian-chef). When I do vagrant up the first time, cookbooks get loaded correctly. However, when I provision afterwards (be it vagrant provision, vagrant reload --provision, or vagrant up --provision) I get this error:
Shared folders that Chef requires are missing on the virtual machine.
This is usually due to configuration changing after already booting the
machine. The fix is to run a `vagrant reload` so that the proper shared
folders will be prepared and mounted on the VM.
I searched everywhere, and the only solution given is to do vagrant reload --provision; this worked up to Vagrant 1.3.1.
It seems like there is a bug with synced folders; clearing the cache fixed it for me (run from your project directory):
rm .vagrant/machines/default/virtualbox/synced_folders
vagrant reload --provision
https://github.com/mitchellh/vagrant/issues/5199
EDIT: this should be fixed in vagrant 1.7.4
That's a fairly common issue with the Vagrant plugins for both Berkshelf and Librarian. Just get used to running that command.
The way to avoid it is to use something like Test-Kitchen instead of the Vagrant plugins. That isn't a drop-in replacement though.
I'm using Vagrant to deploy chef scripts to an AWS server (and it mostly works awesome). I have set up a local rsync in my Vagrantfile to mirror my dev directory onto the server.
config.vm.synced_folder "../geoevents", "/vagrant/geoevents-repo"
And this syncs fine on vagrant provision. I'm wondering if there is an easier way to have Vagrant trigger only that rsync, or to control how often the rsync occurs?
Or, should I not be using rsync, but instead mount a shared file system?
The Vagrant CLI now has two new commands, vagrant rsync and vagrant rsync-auto, which can do the job.
Command: vagrant rsync
This command forces a re-sync of any rsync synced folders.
Note that if you change any settings within the rsync synced folders such as exclude paths, you will need to vagrant reload before this command will pick up those changes.
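For reference, an rsync synced folder with an exclude might look like this in the Vagrantfile (this assumes Vagrant 1.5+, where the rsync synced folder type is available; the exclude path is just an example):
config.vm.synced_folder "../geoevents", "/vagrant/geoevents-repo", type: "rsync", rsync__exclude: ".git/"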
https://www.vagrantup.com/docs/cli/rsync.html
https://www.vagrantup.com/docs/cli/rsync-auto.html
https://www.vagrantup.com/docs/cli/non-primary.html
Currently, you can meet your needs with the following plugin:
https://github.com/cromulus/vagrant-rsync
By the way, most of the plugin features will be available in Vagrant 1.5 (currently in development).
The vagrant-rsync plugin is deprecated as of Vagrant 1.5. One solution out there is vagrant-unison. You may also check out this discussion. What should also work is a vagrant reload.