I just cannot get my Vagrant box up and running. This Vagrant error is driving me crazy:
$ vagrant up
...
==> default: Mounting shared folders...
default: /vagrant => /path/to/folder/of/Vagrantfile
Failed to mount folders in Linux guest. This is usually because
the "vboxsf" file system is not available. Please verify that
the guest additions are properly installed in the guest and
can work properly. The command attempted was:
mount -t vboxsf -o uid=`id -u vagrant`,gid=`getent group vagrant | cut -d: -f3` /vagrant /vagrant
mount -t vboxsf -o uid=`id -u vagrant`,gid=`id -g vagrant` /vagrant /vagrant
...
$ vagrant reload
==> default: Attempting graceful shutdown of VM...
==> default: Clearing any previously set forwarded ports...
==> default: Clearing any previously set network interfaces...
There was an error while executing `VBoxManage`, a CLI used by Vagrant
for controlling VirtualBox. The command and stderr is shown below.
Command: ["hostonlyif", "create"]
Stderr: 0%...
Progress state: NS_ERROR_FAILURE
VBoxManage: error: Failed to create the host-only adapter
VBoxManage: error: VBoxNetAdpCtl: Error while adding new interface: failed to open /dev/vboxnetctl: No such file or directory
VBoxManage: error: Details: code NS_ERROR_FAILURE (0x80004005), component HostNetworkInterface, interface IHostNetworkInterface
VBoxManage: error: Context: "int handleCreate(HandlerArg*, int, int*)" at line 66 of file VBoxManageHostonly.cpp
Things I've tried:
sudo launchctl load /Library/LaunchDaemons/org.virtualbox.startup.plist
destroying the box and trying again
restarting
Does anyone else have this problem and found a working solution?
Vagrant version: 1.5.4
VirtualBox version: 4.3.10 r93012
Content of my Vagrantfile:
Vagrant.configure("2") do |config|
config.vm.box = "precise64"
config.vm.box_url = "http://files.vagrantup.com/precise64.box"
config.vm.network :private_network, ip: "192.168.56.101" #eth1
# find your correkt interface name by: VBoxManage list bridgedifs"
config.vm.network :public_network, bridge: "en1: Monitor-Ethernet" #eth2
config.ssh.forward_agent = true
config.vm.provider :virtualbox do |v|
v.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
v.customize ["modifyvm", :id, "--memory", 2048]
v.customize ["modifyvm", :id, "--ioapic", "on"]
v.customize ["modifyvm", :id, "--cpus", "2"]
v.customize ["modifyvm", :id, "--name", "fsc_box"]
end
nfs_setting = RUBY_PLATFORM =~ /darwin/ || RUBY_PLATFORM =~ /linux/
config.vm.synced_folder "../../", "/var/www/symfony/current/", id: "symfony" , :nfs => nfs_setting
config.vm.provision :shell, :inline =>
"if [[ ! -f /apt-get-run ]]; then sudo apt-get update && sudo touch /apt-get-run; fi"
config.vm.provision :puppet do |puppet|
puppet.manifests_path = "site/manifests"
puppet.module_path = "."
puppet.options = "--verbose --debug"
end
end
It seems I fixed it by re-installing VirtualBox.
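For anyone hitting the same /dev/vboxnetctl error, a lighter first step than a full re-install is restarting (not just loading) VirtualBox's kernel service; a sketch for OS X, assuming the standard location of the launch daemon already mentioned above:
sudo launchctl unload /Library/LaunchDaemons/org.virtualbox.startup.plist
sudo launchctl load /Library/LaunchDaemons/org.virtualbox.startup.plist
vagrant reload    # retry once the service is back up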
My CodeBuild project creates an AMI with Packer, using the Ansible provisioner.
This Packer configuration succeeds in my local environment and on an Amazon Linux 2 EC2 instance. However, when I run it in AWS CodeBuild with the aws/codebuild/amazonlinux2-x86_64-standard:1.0 image, it fails with:
Authentication or permission failure, did not have permissions on the remote directory
I already tried the settings remote_tmp = /tmp and remote_tmp = /tmp/.ansible-${USER}/tmp, but neither worked.
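(For reference, a remote_tmp override like the ones above lives in the [defaults] section of ansible.cfg, roughly as follows:)
[defaults]
remote_tmp = /tmp/.ansible-${USER}/tmp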
My buildspec.yml:
version: 0.2
phases:
  install:
    runtime-versions:
      python: 3.7
  pre_build:
    commands:
      - python --version
      - pip --version
      - curl -qL -o packer.zip https://releases.hashicorp.com/packer/1.4.3/packer_1.4.3_linux_amd64.zip && unzip packer.zip
      - ./packer version
      - pip install --user ansible==2.8.5
      - ansible --version
      - echo 'Validate packer json'
      - ./packer validate packer.json
  build:
    commands:
      - ./packer build -color=false packer.json | tee build.log
My packer.json:
{
  "builders": [{
    "type": "amazon-ebs",
    "region": "ap-northeast-1",
    "ami_regions": "ap-northeast-1",
    "source_ami": "ami-0ff21806645c5e492",
    "instance_type": "t2.micro",
    "ssh_username": "ec2-user",
    "ami_name": "packer-quick-start {{timestamp}}",
    "ami_description": "created by packer at {{timestamp}}",
    "ebs_optimized": false,
    "tags": {
      "OS_Version": "Amazon Linux AMI 2018.03",
      "timestamp": "{{timestamp}}",
      "isotime": "{{isotime \"2006-01-02 03:04:05\"}}"
    },
    "disable_stop_instance": false
  }],
  "provisioners": [
    {
      "type": "ansible",
      "extra_arguments": [
        "-vvv"
      ],
      "playbook_file": "ansible/main.yaml"
    }
  ]
}
The Packer build output from CodeBuild:
==> amazon-ebs: Prevalidating AMI Name: packer-quick-start 1569943272
amazon-ebs: Found Image ID: ami-0ff21806645c5e492
==> amazon-ebs: Creating temporary keypair: packer_5d936ee8-541f-5c9a-6955-9672526afc1a
==> amazon-ebs: Creating temporary security group for this instance: packer_5d936ef1-6546-d9d0-60ff-2dc4c011036f
==> amazon-ebs: Authorizing access to port 22 from [0.0.0.0/0] in the temporary security groups...
==> amazon-ebs: Launching a source AWS instance...
==> amazon-ebs: Adding tags to source instance
amazon-ebs: Adding tag: "Name": "Packer Builder"
amazon-ebs: Instance ID: i-04b00db56a8b3b6d0
==> amazon-ebs: Waiting for instance (i-04b00db56a8b3b6d0) to become ready...
==> amazon-ebs: Using ssh communicator to connect: 3.112.61.8
==> amazon-ebs: Waiting for SSH to become available...
==> amazon-ebs: Connected to SSH!
==> amazon-ebs: Provisioning with Ansible...
==> amazon-ebs: Executing Ansible: ansible-playbook --extra-vars packer_build_name=amazon-ebs packer_builder_type=amazon-ebs -o IdentitiesOnly=yes -i /tmp/packer-provisioner-ansible244097143 /codebuild/output/src965785042/src/github.com/repoUsername/reponame/ansible/main.yaml -e ansible_ssh_private_key_file=/tmp/ansible-key242793848 -vvv
amazon-ebs: ansible-playbook 2.8.5
amazon-ebs: config file = /codebuild/output/src965785042/src/github.com/repoUsername/reponame/ansible.cfg
amazon-ebs: configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
amazon-ebs: ansible python module location = /root/.local/lib/python3.7/site-packages/ansible
amazon-ebs: executable location = /root/.local/bin/ansible-playbook
amazon-ebs: python version = 3.7.4 (default, Sep 20 2019, 22:55:10) [GCC 7.3.1 20180303 (Red Hat 7.3.1-5)]
amazon-ebs: Using /codebuild/output/src965785042/src/github.com/repoUsername/reponame/ansible.cfg as config file
amazon-ebs: host_list declined parsing /tmp/packer-provisioner-ansible244097143 as it did not pass it's verify_file() method
amazon-ebs: script declined parsing /tmp/packer-provisioner-ansible244097143 as it did not pass it's verify_file() method
amazon-ebs: auto declined parsing /tmp/packer-provisioner-ansible244097143 as it did not pass it's verify_file() method
amazon-ebs: Parsed /tmp/packer-provisioner-ansible244097143 inventory source with ini plugin
amazon-ebs:
amazon-ebs: PLAYBOOK: main.yaml ************************************************************
amazon-ebs: 1 plays in /codebuild/output/src965785042/src/github.com/repoUsername/reponame/ansible/main.yaml
amazon-ebs:
amazon-ebs: PLAY [all] *********************************************************************
amazon-ebs: META: ran handlers
amazon-ebs:
amazon-ebs: TASK [be sure httpd is installed] **********************************************
amazon-ebs: task path: /codebuild/output/src965785042/src/github.com/repoUsername/reponame/ansible/main.yaml:6
amazon-ebs: <127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: root
amazon-ebs: <127.0.0.1> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=35595 -o 'IdentityFile="/tmp/ansible-key242793848"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/02aaab1733 127.0.0.1 '/bin/sh -c '"'"'echo ~root && sleep 0'"'"''
amazon-ebs: <127.0.0.1> (0, b'/root\n', b"Warning: Permanently added '[127.0.0.1]:35595' (RSA) to the list of known hosts.\r\n")
amazon-ebs: <127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: root
amazon-ebs: <127.0.0.1> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=35595 -o 'IdentityFile="/tmp/ansible-key242793848"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/02aaab1733 127.0.0.1 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1569943320.4544108-49329379039882 `" && echo ansible-tmp-1569943320.4544108-49329379039882="` echo /root/.ansible/tmp/ansible-tmp-1569943320.4544108-49329379039882 `" ) && sleep 0'"'"''
amazon-ebs: <127.0.0.1> (1, b'', b'mkdir: cannot create directory \xe2\x80\x98/root\xe2\x80\x99: Permission denied\n')
amazon-ebs: <127.0.0.1> Failed to connect to the host via ssh: mkdir: cannot create directory ‘/root’: Permission denied
amazon-ebs: fatal: [default]: UNREACHABLE! => {
amazon-ebs: "changed": false,
amazon-ebs: "msg": "Authentication or permission failure. In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \"/tmp\". Failed command was: ( umask 77 && mkdir -p \"` echo /root/.ansible/tmp/ansible-tmp-1569943320.4544108-49329379039882 `\" && echo ansible-tmp-1569943320.4544108-49329379039882=\"` echo /root/.ansible/tmp/ansible-tmp-1569943320.4544108-49329379039882 `\" ), exited with result 1",
amazon-ebs: "unreachable": true
amazon-ebs: }
amazon-ebs:
amazon-ebs: PLAY RECAP *********************************************************************
amazon-ebs: default : ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0
amazon-ebs:
==> amazon-ebs: Terminating the source AWS instance...
==> amazon-ebs: Cleaning up any extra volumes...
==> amazon-ebs: No volumes to clean up, skipping
==> amazon-ebs: Deleting temporary security group...
==> amazon-ebs: Deleting temporary keypair...
I know it fails because the mkdir of /root was denied, but I don't understand why it tried to create /root in the first place. How can I change this behavior?
I solved it, and the cause turned out to be super simple.
Because AWS CodeBuild runs the build as the root user, Ansible also connected as the root user. I just added the user option like this and it worked:
"provisioners": [
{
"type" : "ansible",
"user": "ec2-user",
"playbook_file" : "ansible/main.yaml"
}
]
My Ansible playbook is kept simple for testing:
---
- hosts: all
  become: yes
  gather_facts: no
  tasks:
    - name: be sure httpd is installed
      yum: name=httpd state=installed
    - name: be sure httpd is running and enabled
      service: name=httpd state=started enabled=yes
I have set up a list of EC2 instances that I can log into via SSH, using an ssh.cfg file that performs proxying and agent forwarding through a publicly accessible NAT instance, as follows:
ssh -F ssh.cfg admin@private_instance_ip
ssh.cfg has the following structure:
Host 10.40.*
  ProxyCommand ssh -W %h:%p ec2-user@nat_instance_ip -o StrictHostKeyChecking=no

Host *
  ControlMaster auto
  ControlPath ~/.ssh/mux-%r@%h:%p
  ControlPersist 60m
  ForwardAgent yes
My network is of course 10.40.*.
I want to test ansible connectivity to such hosts (defined in inventoryfile) as follows:
ansible -i inventoryfile -m ping all
Is there a way to do this?
Edit: an example of how the Ansible ping fails:
ansible -i inventoryfile -m ping 10.40.187.22
10.40.187.22 | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: Permission denied (publickey).\r\n",
"unreachable": true
}
However:
ssh -F ssh.cfg admin@10.40.187.22
Last login: Fri Oct 27 07:36:05 2017 from private_ip_of_NAT
admin@ip-10.40.187.22:~$ exit
Here is the ansible.cfg:
[defaults]
nocows = 1
callback_whitelist = profile_tasks
host_key_checking = False
retry_files_enabled = False
gathering = explicit
forks=50
vault_password_file = .vault
[ssh_connection]
ssh_args = -F ssh.cfg
pipelining = True
Edit 2: when specifying the user in the ping command, it still gives me an error:
ansible -i inventoryfile -m ping 10.40.187.22 -u admin
10.40.187.22 | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: Permission denied (publickey).\r\n",
"unreachable": true
}
It turns out that the private IPs won't work with ansible ping when the connection is proxied via e.g. a public NAT instance.
The inventory file is dynamic and based on AWS tags, i.e.:
[ec2_hosts:children]
tag_Name_my_srv1
tag_Name_my_srv2
So the following command succeeds:
ansible -i inventoryfile -m ping tag_Name_my_srv1 -u admin
10.40.187.22 | SUCCESS => {
"changed": false,
"failed": false,
"ping": "pong"
}
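As an aside, when a ping like this fails it helps to see the exact ssh command line Ansible builds, and whether the ssh_args = -F ssh.cfg setting from ansible.cfg is actually applied; a sketch, reusing the inventory and tag group from above:
ansible -i inventoryfile -m ping tag_Name_my_srv1 -u admin -vvvv
# the -vvvv output prints the full ssh invocation, so you can check that -F ssh.cfg is present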
I'm trying to run a simple gather_facts playbook using Ansible. I can connect via SSH using the user credentials with no issues, but for a reason I cannot get my head around, the playbook fails with the following message:
2017-10-07 22:57:44,248 ncclient.transport.ssh Unknown exception: cannot import name aead
OS: Ubuntu (Ubuntu 16.04.3 LTS)
Destination Router: Virtualbox JunOS Olive [12.1R1.9]
Ansible Version: 2.4.0.0
My hosts file:
[all:vars]
ansible_python_interpreter=/usr/bin/python
ansible_connection = local
[junos]
lab.r1
Playbook:
---
- hosts: junos
  gather_facts: no
  tasks:
    - name: obtain login credentials
      include_vars: ../auth/secrets.yml

    - name: Checking NETCONF connectivity
      wait_for: host={{ inventory_hostname }} port=830 timeout=5

    - name: Gather Facts
      junos_facts:
        host: "{{ inventory_hostname }}"
        username: "{{ creds['username'] }}"
        password: "{{ creds['password'] }}"
      register: junos

    - name: version
      debug: msg="{{ junos.facts.version }}"
Playbook output:
$ ansible-playbook -vvvv junos-get_facts.yml
ansible-playbook 2.4.0.0
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/usr/local/lib/python2.7/dist-packages/ansible/modules']
ansible python module location = /usr/local/lib/python2.7/dist-packages/ansible
executable location = /usr/local/bin/ansible-playbook
python version = 2.7.12 (default, Nov 19 2016, 06:48:10) [GCC 5.4.0 20160609]
Using /etc/ansible/ansible.cfg as config file
setting up inventory plugins
Parsed /etc/ansible/hosts inventory source with ini plugin
Loading callback plugin default of type stdout, v2.0 from /usr/local/lib/python2.7/dist-packages/ansible/plugins/callback/__init__.pyc
PLAYBOOK: junos-get_facts.yml ******************************************************************************************************************
1 plays in junos-get_facts.yml
PLAY [junos] ***********************************************************************************************************************************
META: ran handlers
TASK [obtain login credentials] ****************************************************************************************************************
task path: /usr/local/share/ansible/junos/junos-get_facts.yml:6
Trying secret FileVaultSecret(filename='/usr/local/share/ansible/auth/vault/vault_pass.py') for vault_id=default
ok: [lab.r1] => {
"ansible_facts": {
"creds": {
"password": "*******",
"username": "ansible"
}
},
"ansible_included_var_files": [
"/usr/local/share/ansible/junos/../auth/secrets.yml"
],
"changed": false,
"failed": false
}
TASK [Checking NETCONF connectivity] ***********************************************************************************************************
task path: /usr/local/share/ansible/junos/junos-get_facts.yml:9
Using module file /usr/local/lib/python2.7/dist-packages/ansible/modules/utilities/logic/wait_for.py
<lab.r1> ESTABLISH LOCAL CONNECTION FOR USER: ansible
<lab.r1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1507431462.1-117888621897412 `" && echo ansible-tmp-1507431462.1-117888621897412="` echo $HOME/.ansible/tmp/ansible-tmp-1507431462.1-117888621897412 `" ) && sleep 0'
<lab.r1> PUT /tmp/tmpW193y0 TO /usr/local/share/ansible/.ansible/tmp/ansible-tmp-1507431462.1-117888621897412/wait_for.py
<lab.r1> EXEC /bin/sh -c 'chmod u+x /usr/local/share/ansible/.ansible/tmp/ansible-tmp-1507431462.1-117888621897412/ /usr/local/share/ansible/.ansible/tmp/ansible-tmp-1507431462.1-117888621897412/wait_for.py && sleep 0'
<lab.r1> EXEC /bin/sh -c '/usr/bin/python /usr/local/share/ansible/.ansible/tmp/ansible-tmp-1507431462.1-117888621897412/wait_for.py; rm -rf "/usr/local/share/ansible/.ansible/tmp/ansible-tmp-1507431462.1-117888621897412/" > /dev/null 2>&1 && sleep 0'
ok: [lab.r1] => {
"changed": false,
"elapsed": 0,
"failed": false,
"invocation": {
"module_args": {
"active_connection_states": [
"ESTABLISHED",
"FIN_WAIT1",
"FIN_WAIT2",
"SYN_RECV",
"SYN_SENT",
"TIME_WAIT"
],
"connect_timeout": 5,
"delay": 0,
"exclude_hosts": null,
"host": "lab.r1",
"msg": null,
"path": null,
"port": 830,
"search_regex": null,
"sleep": 1,
"state": "started",
"timeout": 5
}
},
"path": null,
"port": 830,
"search_regex": null,
"state": "started"
}
TASK [Gather Facts] ****************************************************************************************************************************
task path: /usr/local/share/ansible/junos/junos-get_facts.yml:12
<lab.r1> using connection plugin netconf
<lab.r1> socket_path: None
fatal: [lab.r1]: FAILED! => {
"changed": false,
"failed": true,
"msg": "unable to open shell. Please see: https://docs.ansible.com/ansible/network_debug_troubleshooting.html#unable-to-open-shell"
}
to retry, use: --limit @/usr/local/share/ansible/junos/junos-get_facts.retry
PLAY RECAP *************************************************************************************************************************************
lab.r1 : ok=2 changed=0 unreachable=0 failed=1
The detailed log output shows the following:
2017-10-07 23:19:51,177 p=2906 u=ansible | TASK [Gather Facts] ****************************************************************************************************************************
2017-10-07 23:19:51,180 p=2906 u=ansible | task path: /usr/local/share/ansible/junos/junos-get_facts.yml:12
2017-10-07 23:19:52,739 p=2937 u=ansible | creating new control socket for host lab.r1:830 as user ansible
2017-10-07 23:19:52,740 p=2937 u=ansible | control socket path is /usr/local/share/ansible/.ansible/pc/b52ae79c72
2017-10-07 23:19:52,740 p=2937 u=ansible | current working directory is /usr/local/share/ansible/junos
2017-10-07 23:19:52,741 p=2937 u=ansible | using connection plugin netconf
2017-10-07 23:19:52,937 p=2937 u=ansible | network_os is set to junos
2017-10-07 23:19:52,951 p=2937 u=ansible | ssh connection done, stating ncclient
2017-10-07 23:19:52,982 p=2937 u=ansible | failed to create control socket for host lab.r1
2017-10-07 23:19:52,985 p=2937 u=ansible | Traceback (most recent call last):
File "/usr/local/bin/ansible-connection", line 316, in main
server = Server(socket_path, pc)
File "/usr/local/bin/ansible-connection", line 112, in __init__
self.connection._connect()
File "/usr/local/lib/python2.7/dist-packages/ansible/plugins/connection/netconf.py", line 158, in _connect
ssh_config=ssh_config
File "/usr/local/lib/python2.7/dist-packages/ncclient/manager.py", line 154, in connect
return connect_ssh(*args, **kwds)
File "/usr/local/lib/python2.7/dist-packages/ncclient/manager.py", line 116, in connect_ssh
session.load_known_hosts()
File "/usr/local/lib/python2.7/dist-packages/ncclient/transport/ssh.py", line 299, in load_known_hosts
self._host_keys.load(filename)
File "/usr/local/lib/python2.7/dist-packages/paramiko/hostkeys.py", line 97, in load
e = HostKeyEntry.from_line(line, lineno)
File "/usr/local/lib/python2.7/dist-packages/paramiko/hostkeys.py", line 358, in from_line
key = ECDSAKey(data=decodebytes(key), validate_point=False)
File "/usr/local/lib/python2.7/dist-packages/paramiko/ecdsakey.py", line 156, in __init__
self.verifying_key = numbers.public_key(backend=default_backend())
File "/usr/local/lib/python2.7/dist-packages/cryptography/hazmat/backends/__init__.py", line 15, in default_backend
from cryptography.hazmat.backends.openssl.backend import backend
File "/usr/local/lib/python2.7/dist-packages/cryptography/hazmat/backends/openssl/__init__.py", line 7, in <module>
from cryptography.hazmat.backends.openssl.backend import backend
File "/usr/local/lib/python2.7/dist-packages/cryptography/hazmat/backends/openssl/backend.py", line 23, in <module>
from cryptography.hazmat.backends.openssl import aead
ImportError: cannot import name aead
2017-10-07 23:20:02,775 p=2906 u=ansible | fatal: [lab.r1]: FAILED! => {
"changed": false,
"failed": true,
"msg": "unable to open shell. Please see: https://docs.ansible.com/ansible/network_debug_troubleshooting.html#unable-to-open-shell"
}
Any help is appreciated.
The answer, from Paul Kehrer, was:
aead is being imported by the backend, but also can't be found. This sounds like it may be trying to import two different versions of cryptography. pycrypto is irrelevant here (it is an unrelated package). First I'd suggest upgrading cryptography, but since aead was added in 2.0 you may need to make sure you don't have cryptography installed both via pip and also via your distribution package manager.
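In practice that means checking which copy of cryptography Python actually imports and then removing the duplicate pip installs; a rough sketch of the commands involved (package names come from the answer above, paths will vary per system):
pip show cryptography                                           # the copy pip manages, and its version
python -c "import cryptography; print(cryptography.__file__)"  # the copy Python actually imports
pip uninstall pycrypto cryptography                             # drop the pip copies; keep the distro package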
Once I removed pycrypto and cryptography via pip, the playbook ran as expected:
TASK [version] *************************************************************************************************************************************************
task path: /usr/local/share/ansible/junos/junos-get_facts.yml:25
ok: [lab.r1] => {
"msg": "olive"
}
META: ran handlers
META: ran handlers
PLAY RECAP *****************************************************************************************************************************************************
lab.r1 : ok=5 changed=0 unreachable=0 failed=0
I'm writing Chef wrappers around some of the built-in OpsWorks cookbooks. I'm using Berkshelf to clone the OpsWorks cookbooks from their GitHub repo.
This is my Berksfile:
source 'https://supermarket.getchef.com'
metadata
def opsworks_cookbook(name)
  cookbook name, github: 'aws/opsworks-cookbooks', branch: 'release-chef-11.10', rel: name
end

%w(dependencies scm_helper mod_php5_apache2 ssh_users opsworks_agent_monit
   opsworks_java gem_support opsworks_commons opsworks_initial_setup
   opsworks_nodejs opsworks_aws_flow_ruby
   deploy mysql memcached).each do |cb|
  opsworks_cookbook cb
end
My metadata.rb:
depends 'deploy'
depends 'mysql'
depends 'memcached'
The problem is, when I try to override attributes that depend on the opsworks key in the node hash, I get a:
NoMethodError
-------------
undefined method `[]=' for nil:NilClass
OpsWorks has a whole bunch of pre-recipe dependencies that create these keys and do a lot of their setup. I'd like to find a way to either pull in those services and run them against my Kitchen instances or mock them in a way that I can actually test my recipes.
Is there a way to do this?
I would HIGHLY recommend you check out Mike Greiling's blog post Simplify OpsWorks Development With Packer and his GitHub repo opsworks-vm, which help you mock the entire OpsWorks stack, including the install of the OpsWorks agent, so you can also test app deploy recipes, multiple layers, multiple instances at the same time, etc. It is extremely impressive.
Quick Start on Ubuntu 14.04
NOTE: This can NOT be done from an Ubuntu virtual machine, because VirtualBox does not support nested virtualization of 64-bit machines.
Install ChefDK
mkdir /tmp/packages && cd /tmp/packages
wget https://opscode-omnibus-packages.s3.amazonaws.com/ubuntu/12.04/x86_64/chefdk_0.8.1-1_amd64.deb
sudo dpkg -i chefdk_0.8.1-1_amd64.deb
cd /opt/chefdk/
chef verify
which ruby
echo 'eval "$(chef shell-init bash)"' >> ~/.bash_profile && source ~/.bash_profile
Install VirtualBox
echo 'deb http://download.virtualbox.org/virtualbox/debian vivid contrib' | sudo tee /etc/apt/sources.list.d/virtualbox.list
wget -q https://www.virtualbox.org/download/oracle_vbox.asc -O- | sudo apt-key add -
sudo apt-get update -qqy
sudo apt-get install virtualbox-5.0 dkms
Install Vagrant
cd /tmp/packages
wget https://dl.bintray.com/mitchellh/vagrant/vagrant_1.7.4_x86_64.deb
sudo dpkg -i vagrant_1.7.4_x86_64.deb
vagrant plugin install vagrant-berkshelf
vagrant plugin install vagrant-omnibus
vagrant plugin list
Install Packer
mkdir /opt/packer && cd /opt/packer
wget https://dl.bintray.com/mitchellh/packer/packer_0.8.6_linux_amd64.zip
unzip packer_0.8.6_linux_amd64.zip
echo 'PATH=$PATH:/opt/packer' >> ~/.bash_profile && source ~/.bash_profile
Build Mike Greiling's opsworks-vm virtualbox image using Packer
mkdir ~/packer && cd ~/packer
git clone https://github.com/pixelcog/opsworks-vm.git
cd opsworks-vm
rake build install
This will install a new VirtualBox-based Vagrant box to ~/.vagrant.d/boxes/ubuntu1404-opsworks/.
To mock a single OpsWorks instance, create a new Vagrantfile like so:
Vagrant.configure("2") do |config|
config.vm.box = "ubuntu1404-opsworks"
config.vm.provision :opsworks, type: 'shell', args: 'path/to/dna.json'
end
The dna.json file path is set relative to the Vagrantfile and should contain any JSON data you wish to send to OpsWorks Chef.
For example:
{
  "deploy": {
    "my-app": {
      "application_type": "php",
      "scm": {
        "scm_type": "git",
        "repository": "path/to/my-app"
      }
    }
  },
  "opsworks_custom_cookbooks": {
    "enabled": true,
    "scm": {
      "repository": "path/to/my-cookbooks"
    },
    "recipes": [
      "recipe[opsworks_initial_setup]",
      "recipe[dependencies]",
      "recipe[mod_php5_apache2]",
      "recipe[deploy::default]",
      "recipe[deploy::php]",
      "recipe[my_custom_cookbook::configure]"
    ]
  }
}
To mock multiple OpsWorks instances and include layers, see his AWS OpsWorks "Getting Started" example, which includes the stack.json below.
Vagrantfile (for multiple instances)
Vagrant.configure("2") do |config|
config.vm.box = "ubuntu1404-opsworks"
# Create the php-app layer
config.vm.define "app" do |layer|
layer.vm.provision "opsworks", type:"shell", args:[
'ops/dna/stack.json',
'ops/dna/php-app.json'
]
# Forward port 80 so we can see our work
layer.vm.network "forwarded_port", guest: 80, host: 8080
layer.vm.network "private_network", ip: "10.10.10.10"
end
# Create the db-master layer
config.vm.define "db" do |layer|
layer.vm.provision "opsworks", type:"shell", args:[
'ops/dna/stack.json',
'ops/dna/db-master.json'
]
layer.vm.network "private_network", ip: "10.10.10.20"
end
end
stack.json
{
  "opsworks": {
    "layers": {
      "php-app": {
        "instances": {
          "php-app1": {"private-ip": "10.10.10.10"}
        }
      },
      "db-master": {
        "instances": {
          "db-master1": {"private-ip": "10.10.10.20"}
        }
      }
    }
  },
  "deploy": {
    "simple-php": {
      "application_type": "php",
      "document_root": "web",
      "scm": {
        "scm_type": "git",
        "repository": "dev/simple-php"
      },
      "memcached": {},
      "database": {
        "host": "10.10.10.20",
        "database": "simple-php",
        "username": "root",
        "password": "correcthorsebatterystaple",
        "reconnect": true
      }
    }
  },
  "mysql": {
    "server_root_password": "correcthorsebatterystaple",
    "tunable": {"innodb_buffer_pool_size": "256M"}
  },
  "opsworks_custom_cookbooks": {
    "enabled": true,
    "scm": {
      "repository": "ops/cookbooks"
    }
  }
}
For those not familiar with Vagrant: you just do a vagrant up to start the instance(s). Then you can modify your cookbook locally and apply any changes by re-running Chef against the existing instance(s) with vagrant provision, and you can do a vagrant destroy followed by vagrant up to start from scratch, as sketched below.
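The day-to-day loop, in commands (all taken from the description above):
vagrant up          # build and provision the instance(s)
# ...edit the cookbook locally...
vagrant provision   # re-run Chef against the running instance(s)
vagrant destroy     # throw the instance(s) away
vagrant up          # and rebuild from scratch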
You'll have to do it manually. You can add arbitrary attributes to your .kitchen.yml; just go on an OpsWorks machine, log the values you need, and either use them directly or adapt them into workable test data, along the lines of the sketch below.
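A minimal sketch of the relevant part of a .kitchen.yml, assuming a hypothetical wrapper recipe and attribute values copied from a real OpsWorks node (all names below are made up):
suites:
  - name: default
    run_list:
      - recipe[my_wrapper::default]   # hypothetical wrapper recipe
    attributes:
      opsworks:                       # keys logged from an OpsWorks instance
        stack:
          name: my-test-stack
      deploy:
        my-app:
          application_type: php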
I intended to use Vagrant and Chef Solo to set up an AWS environment, but I got some errors that I cannot solve. Can anybody help me?
The steps I used:
Install all the necessary tools on Mac OS X: Vagrant, the Vagrant plugins, VirtualBox, Chef, the Chef plugins, and so on.
Download vagrant configuration files:
git clone https://github.com/ICTatRTI/ict-chef-repo
Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
# All Vagrant configuration is done here. The most common configuration
# options are documented and commented below. For a complete reference,
# please see the online documentation at vagrantup.com.
#config.vm.box_url = "https://opscode-vm.s3.amazonaws.com/vagrant/opscode_ubuntu-12.04_chef-11.2.0.box"
#config.vm.box = "opscode-ubuntu-1204"
config.vm.box = "dummy"
config.vm.network :forwarded_port, guest: 80, host: 8888
config.vm.network :forwarded_port, guest: 3306, host: 3333
config.ssh.username = "ubuntu"
config.vm.provider :aws do |aws, override|
#config.vm.provider :aws do |aws|
aws.access_key_id = 'XXXXXXXXXXXXXXXQ'
aws.secret_access_key = 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXX'
aws.keypair_name = "usr-aws-2013"
aws.availability_zone = "us-west-2c"
aws.instance_type = "t1.micro"
aws.region = "us-west-2"
aws.ami = "ami-0849a03f"
aws.security_groups = ['quicklaunch-1']
aws.tags = {
'Name' => 'tong',
'Description' => 'vagrant test'
}
override.ssh.private_key_path = "~/.ssh/usr-aws-2013.pem"
override.ssh.username = "ubuntu"
end
config.vm.provision :chef_solo do |chef|
chef.node_name = 'base'
chef.cookbooks_path = "./cookbooks"
chef.roles_path = "./roles"
chef.add_role "base"
chef.add_role "ushahidi"
end
end
Run:
vagrant up --provider=aws
I got the following errors:
Bringing machine 'default' up with 'aws' provider...
WARNING: Nokogiri was built against LibXML version 2.8.0, but has dynamically loaded 2.9.1
[default] Warning! The AWS provider doesn't support any of the Vagrant
high-level network configurations (`config.vm.network`). They
will be silently ignored.
[default] Launching an instance with the following settings...
[default] -- Type: t1.micro
[default] -- AMI: ami-0849a03f
[default] -- Region: us-west-2
[default] -- Availability Zone: us-west-2c
[default] -- Keypair: usr-aws-2013
[default] -- Security Groups: ["quicklaunch-1"]
[default] -- Block Device Mapping: []
[default] -- Terminate On Shutdown: false
An error occurred while executing multiple actions in parallel.
Any errors that occurred are shown below.
An unexpected error ocurred when executing the action on the
'default' machine. Please report this as a bug:
The image id '[ami-0849a03f]' does not exist
An instance and an AMI are different things, and they have different IDs too. So if you have i-bddcf889, you cannot reference it in your Vagrantfile as ami-bddcf889.
You also don't have to create/start the instance manually; instead, you must provide an AMI from which Vagrant will create the instance itself, for example the AMI you originally created your manual instance from.
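If you are not sure which AMI an existing instance was launched from, the AWS CLI can tell you; a sketch, reusing the hypothetical instance ID from above and the region from the Vagrantfile:
aws ec2 describe-instances --instance-ids i-bddcf889 --region us-west-2 \
  --query 'Reservations[].Instances[].ImageId' --output text
# put the returned ami-xxxxxxxx value into aws.ami in the Vagrantfile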