How to check what was installed by Ansible? - VirtualBox

I cloned a repository from GitHub and ran its setup, which itself uses Ansible. It created a virtual machine and installed many things, but I couldn't keep track of everything that was installed on the VirtualBox VM. At the end, the script gave the following output:
: ok=29 changed=17 unreachable=0 failed=0
Now, how do I see what was installed on the VirtualBox VM?

Add the --list-tasks option to the ansible-playbook command to list the high-level tasks executed by Ansible. This should give you an idea of what Ansible orchestrated on the VM.
ansible-playbook --list-tasks playbook.yml

Ansible logs everything it does on a remote server to syslog. Access your virtual machine and grep the syslog for 'ansible' (module invocations are tagged per module, e.g. 'ansible-command'). This way you can see exactly which Ansible tasks were run on the virtual machine.
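For example, on a Debian/Ubuntu guest something like this will show the logged entries (a sketch; the log path is an assumption and varies by distribution):
grep ansible /var/log/syslog    # use /var/log/messages on RHEL/CentOS guests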

Related

AWS EC2 User Data not working (Tried Installing and starting httpd via User Data)

The following is my EC2 User Data:
#!/bin/bash
sudo yum update -y
sudo yum install -y httpd
sudo systemctl start httpd
sudo systemctl enable httpd
In the Security Group, SSH port 22 and HTTP port 80 are open.
Yet when I try accessing http://public_ip_of_instance, the Apache HTTP page doesn't load.
Also, Apache is not installed on the instance when I check with sudo systemctl status httpd.
I then tried it manually on the EC2 server and it worked. Then I removed it via yum remove, as I wanted to see whether User Data works.
I stopped the instance and started it again, but the User Data script didn't run: I couldn't access the HTTP page through the browser, and httpd was not installed on the instance.
Where is the actual issue? I remember this same thing working on another instance some months back.
Your user data is correct. Whatever is happening with your website is not due to the user data code that you provided.
There could be many reasons it doesn't work. The public IP of the instance has changed, as always happens when you stop/start an instance. The instance may have pre-existing software that clashes with httpd.
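As a quick check (a sketch using the EC2 instance metadata service, reachable from inside the instance), confirm which public IP the instance currently has before testing in the browser:
# prints the instance's current public IPv4 address; it changes on every
# stop/start unless you attach an Elastic IP
curl -s http://169.254.169.254/latest/meta-data/public-ipv4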
Here's some general advice on running UserData once or on each startup.
Short answer: as John mentioned in the comments, EC2 instances only run the UserData (aka bootstrap) script once, on initialization.
The user data Bash/PowerShell script is infrastructure-as-code: you deploy the script and it installs and configures the machine.
This causes confusion for everyone starting out with AWS. When you think about it, though, it doesn't make sense to run the UserData script on every boot when the machine has already been configured.
What people often do instead is create "Golden Images" (aka Amazon Machine Images, AMIs) from pre-configured EC2 instances, typically for machines that take a long time to install and configure. The beauty of this is that you can set up Auto Scaling groups to use those images, which avoids any long installation during a scale-up event.
Pro tip: when developing a UserData script, run through and test it manually on the EC2 instance. Trust me, it's far quicker than troubleshooting unattended EC2 UserData errors.
Long answer: you can run the UserData on each boot of the machine using a MIME multi-part file. A MIME multi-part file allows your script to override how frequently user data is run by the cloud-init package.
https://aws.amazon.com/premiumsupport/knowledge-center/execute-user-data-ec2/
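A minimal sketch of such a file, mirroring the approach in the linked article (a cloud-config part tells cloud-init to run its scripts-user module on every boot, followed by the shell script itself; the script body here is just an example):
Content-Type: multipart/mixed; boundary="//"
MIME-Version: 1.0

--//
Content-Type: text/cloud-config; charset="us-ascii"

#cloud-config
cloud_final_modules:
- [scripts-user, always]

--//
Content-Type: text/x-shellscript; charset="us-ascii"

#!/bin/bash
yum update -y
yum install -y httpd
systemctl start httpd
systemctl enable httpd
--//--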
For all those who run into this problem: first of all, check the log with the command:
sudo cat /var/log/cloud-init-output.log
If you notice connection errors to the various repositories, the reason is that you don't have an internet connection yet. If, once inside your EC2 instance, you manage to run the update and install commands manually, then the reason they fail in the UserData is that your EC2 instance takes a few seconds to get its internet connection and executes the commands before having one. To solve this problem, just add this command after #!/bin/bash:
#!/bin/bash
# loop until a ping to 8.8.8.8 succeeds, i.e. the network is up
until ping -c1 8.8.8.8 &>/dev/null; do :; done
sudo yum update -y
...
This will prevent your EC2 instance from executing commands before an internet connection is established.

Is it possible to run an Ansible playbook from a Chef AWS/OpsWorks cookbook?

I'm trying to figure out whether it's possible to create a Chef cookbook that SSHes into an Ansible server and runs an Ansible playbook, from AWS OpsWorks, on the current node.
I'm thinking of a script that I can put in an execute resource like this:
define :foobar_magento2_deploy do
  release_path = node[:app_release_path]
  execute 'Ansible playbook' do
    command "ssh -i key ansible-server 'ansible-playbook arg1 arg2'"
  end
end
Do you think it's possible? Are there any caveats? Hints?
Edit, following #coderanger's answer:
define :foobar_magento2_deploy do
  release_path = node[:app_release_path]
  execute 'Ansible playbook' do
    # each call to `command` overwrites the previous one, so chain the
    # steps into a single shell command
    command "git clone ansible-playbook && cd ansible-playbook && ansible-playbook -l localhost playbook.yml"
  end
end
So a couple of things:
OpsWorks Stacks is dangerously out of date and using it should be considered highly suspect.
I don't actually recognize that define block in there; maybe that's an older OpsWorks syntax?
You can definitely run an Ansible playbook from Chef code, but I would probably go a little simpler than you have there. Probably just run ansible-playbook locally and aim it at localhost.
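For example (a sketch, assuming Ansible is installed on the node and the playbook file is present locally), a local run can bypass SSH entirely:
# "localhost," is an inline inventory; -c local skips SSH
ansible-playbook -i localhost, -c local playbook.yml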

Not able to Start/Stop Spark Worker from Remote Machine

I have two machines A and B. I am trying to run Spark Master on machine A and Spark Worker on machine B.
I have set machine B's host name in conf/slaves in my Spark directory.
When I execute start-all.sh to start the master and workers, I get the message below on the console:
abc#abc-vostro:~/spark-scala-2.10$ sudo sh bin/start-all.sh
sudo: /etc/sudoers.d is world writable
starting spark.deploy.master.Master, logging to /home/abc/spark-scala-2.10/bin/../logs/spark-root-spark.deploy.master.Master-1-abc-vostro.out
13/09/11 14:54:29 WARN spark.Utils: Your hostname, abc-vostro resolves to a loopback address: 127.0.1.1; using 1XY.1XY.Y.Y instead (on interface wlan2)
13/09/11 14:54:29 WARN spark.Utils: Set SPARK_LOCAL_IP if you need to bind to another address
Master IP: abc-vostro
cd /home/abc/spark-scala-2.10/bin/.. ; /home/abc/spark-scala-2.10/bin/start-slave.sh 1 spark://abc-vostro:7077
xyz#1XX.1XX.X.X's password:
xyz#1XX.1XX.X.X: bash: line 0: cd: /home/abc/spark-scala-2.10/bin/..: No such file or directory
xyz#1XX.1XX.X.X: bash: /home/abc/spark-scala-2.10/bin/start-slave.sh: No such file or directory
The master starts, but the worker fails to start.
I have set xyz#1XX.1XX.X.X in conf/slaves in my Spark directory.
Can anyone help me resolve this? I'm probably missing some configuration on my end.
However, when I create the Spark master and worker on the same machine, it works fine.
Have you copied all of Spark's files to the worker too? You also need to set up passwordless SSH access between the master and the worker.
Here are the steps I would follow (a sketch of the SSH setup follows the list):
Setting up public key authentication over SSH
Checking /etc/spark/conf.dist/spark-env.sh
scp this to your computer B from computer A (master)
Set conf/slaves, hostname for computer B
./start-all.sh
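A minimal sketch of the passwordless-SSH step, run on the master (machine A); userB and machineB are placeholders for the worker's actual user and address:
ssh-keygen -t rsa              # generate a key pair on the master
ssh-copy-id userB@machineB     # install the public key on the worker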
For standalone cluster mode, you can set these options in spark-env.sh.
For example,
export SPARK_WORKER_CORES=2
export SPARK_WORKER_INSTANCES=1
export SPARK_WORKER_MEMORY=4G
See the SSH access section in Michael Noll's multi-node Hadoop cluster tutorial; setting it up just like that will solve your problem:
http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-multi-node-cluster/

Vagrant Rsync Error before provisioning

So I'm having some adventures with the vagrant-aws plugin, and I'm now stuck on the issue of syncing folders. This is necessary to provision the machines, which is the ultimate goal. However, running vagrant provision on my machine yields
[root#vagrant-puppet-minimal vagrant]# vagrant provision
[default] Rsyncing folder: /home/vagrant/ => /vagrant
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
mkdir -p '/vagrant'
I'm almost positive the error occurs because ssh-ing in manually and running that command yields 'permission denied' (obviously, since a non-root user is trying to make a directory in the root directory). I tried ssh-ing as root, but that seems like bad practice (and Amazon doesn't allow it by default). How can I change the folder to be rsynced with vagrant-aws? I can't seem to find the setting for it. Thanks!
Most likely you are running into the known vagrant-aws issue #72: Failing with EC2 Amazon Linux Images.
Edit 3 (Feb 2014): Vagrant 1.4.0 (released Dec 2013) and later versions now support the boolean configuration parameter config.ssh.pty. Set the parameter to true to force Vagrant to use a PTY for provisioning. Vagrant creator Mitchell Hashimoto points out that you must not set config.ssh.pty on the global config; you must set it on the node config directly.
This new setting should fix the problem, and you shouldn't need the workarounds listed below anymore. (But note that I haven't tested it myself yet.) See Vagrant's CHANGELOG for details -- unfortunately the config.ssh.pty option is not yet documented under SSH Settings in the Vagrant docs.
Edit 2: Bad news. It looks as if even a boothook does not run "fast enough" (to update /etc/sudoers.d/ for !requiretty) before Vagrant tries to rsync. During my testing today I started seeing sporadic "mkdir -p /vagrant" errors again when running vagrant up --no-provision. So we're back to the previous point, where the most reliable fix seems to be a custom AMI image that already includes the applied patch to /etc/sudoers.d.
Edit: Looks like I found a more reliable way to fix the problem. Use a boothook to perform the fix. I manually confirmed that a script passed as a boothook is executed before Vagrant's rsync phase starts. So far it has been working reliably for me, and I don't need to create a custom AMI image.
Extra tip: And if you are relying on cloud-config, too, you can create a Mime Multi Part Archive to combine the boothook and the cloud-config. You can get the latest version of the write-mime-multipart helper script from GitHub.
Usage sketch:
$ cd /tmp
$ wget https://raw.github.com/lovelysystems/cloud-init/master/tools/write-mime-multipart
$ chmod +x write-mime-multipart
$ cat boothook.sh
#!/bin/bash
SUDOERS_FILE=/etc/sudoers.d/999-vagrant-cloud-init-requiretty
echo "Defaults:ec2-user !requiretty" > $SUDOERS_FILE
echo "Defaults:root !requiretty" >> $SUDOERS_FILE
chmod 440 $SUDOERS_FILE
$ cat cloud-config
#cloud-config
packages:
- puppet
- git
- python-boto
$ ./write-mime-multipart boothook.sh cloud-config > combined.txt
You can then pass the contents of 'combined.txt' to aws.user_data, for instance via:
aws.user_data = File.read("/tmp/combined.txt")
Sorry for not mentioning this earlier, but I am literally troubleshooting this right now myself. :)
Original answer (see above for a better approach)
TL;DR: The most reliable fix is to "patch" a stock Amazon Linux AMI image, save it and then use the customized AMI image in your Vagrantfile. See below for details.
Background
A potential workaround is described (and linked in the bug report above) at https://github.com/mitchellh/vagrant-aws/pull/70/files. In a nutshell, add the following to your Vagrantfile:
aws.user_data = "#!/bin/bash\necho 'Defaults:ec2-user !requiretty' > /etc/sudoers.d/999-vagrant-cloud-init-requiretty && chmod 440 /etc/sudoers.d/999-vagrant-cloud-init-requiretty\nyum install -y puppet\n"
Most importantly this will configure the OS to not require a tty for user ec2-user, which seems to be the root of the problem. I /think/ that the additional installation of the puppet package is not required for the actual fix (although Vagrant may use Puppet for provisioning the machine later, depending on how you configured Vagrant).
My experience with the described workaround
I have tried this workaround but Vagrant still occasionally fails with the same error. It might be a "race condition" where Vagrant happens to run its rsync phase faster than cloud-init (which is what aws.user_data is passing information to) can prepare the workaround for #72 on the machine for Vagrant. If Vagrant is faster you will see the same error; if cloud-init is faster it works.
What will work (but requires more effort on your side)
What definitely works is to run the command on a stock Amazon Linux AMI image, and then save the modified image (= create an image snapshot) as a custom AMI image of yours.
# Start an EC2 instance with a stock Amazon Linux AMI image and ssh-connect to it
$ sudo su - root
$ echo 'Defaults:ec2-user !requiretty' > /etc/sudoers.d/999-vagrant-cloud-init-requiretty
$ chmod 440 /etc/sudoers.d/999-vagrant-cloud-init-requiretty
# Note: Installing puppet is mentioned in the #72 bug report but I /think/ you do not need it
# to fix the described Vagrant problem.
$ yum install -y puppet
You must then use this custom AMI image in your Vagrantfile instead of the stock Amazon one. The obvious drawback is that you are not using a stock Amazon AMI image anymore -- whether this is a concern for you or not depends on your requirements.
What I tried but didn't work out
For the record: I also tried to pass a cloud-config to aws.user_data that included a bootcmd to set !requiretty in the same way as the embedded shell script above. According to the cloud-init docs, bootcmd is run "very early" in the startup cycle of an EC2 instance; the idea is that bootcmd instructions would run earlier than Vagrant tries to run its rsync phase. But unfortunately I discovered that the bootcmd feature is not implemented in the outdated cloud-init version of current Amazon Linux AMIs (e.g. ami-05355a6c has cloud-init 0.5.15-69.amzn1, but bootcmd was only introduced in 0.6.1).

How do I associate a Vagrant project directory with an existing VirtualBox VM?

Somehow my Vagrant project has become disassociated from its VirtualBox VM, so that when I vagrant up, Vagrant imports the base box and creates a new virtual machine.
Is there a way to re-associate the Vagrant project with the existing VM?
How does Vagrant internally associate a Vagrantfile with a VirtualBox VM directory?
For Vagrant 1.6.3 do the following:
1) In the directory where your Vagrantfile is located, run the command
VBoxManage list vms
You will have something like this:
"virtualMachine" {xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}
2) Go to the following path:
cd .vagrant/machines/default/virtualbox
3) Create a file called id containing the ID of your VM: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
4) Save the file and run vagrant up
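Put together as shell commands (a sketch; substitute the UUID that VBoxManage printed for your machine):
VBoxManage list vms
mkdir -p .vagrant/machines/default/virtualbox
printf 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx' > .vagrant/machines/default/virtualbox/id
vagrant up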
WARNING: The solution below works for Vagrant 1.0.x but not Vagrant 1.1+.
Vagrant uses the ".vagrant" file in the same directory as your "Vagrantfile" to track the UUID of your VM. This file does not exist if the VM does not exist. The format of the file is JSON; it looks like this if a single VM exists:
{
  "active":{
    "default":"02f8b71c-75c6-4f33-a161-0f46a0665ab6"
  }
}
default is the name of the default virtual machine (if you're not using multi-VM setups).
If your VM has somehow become disassociated, run VBoxManage list vms, which lists every VM VirtualBox knows about by name and UUID. Then manually create a .vagrant file in the same directory as your Vagrantfile and fill in the contents accordingly.
Run vagrant status to ensure that Vagrant picked up the proper changes.
Note: This is not officially supported by Vagrant and Vagrant may change the format of .vagrant at any time. But this is valid as of Vagrant 0.9.7 and will be valid for Vagrant 1.0.
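Put together, a minimal sketch (reusing the example UUID above; use the UUID of your own VM):
VBoxManage list vms    # find your VM's UUID
cat > .vagrant <<'EOF'
{"active":{"default":"02f8b71c-75c6-4f33-a161-0f46a0665ab6"}}
EOF
vagrant status         # confirm Vagrant picked up the change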
The solution for later versions is much the same.
But first you need to launch the .vbox file by hand so that the VM appears in VBoxManage list vms.
Then you can check .vagrant/machines/default/virtualbox/id to verify that the UUID is the right one.
I had this issue today; my .vagrant folder was missing, and I found there were a few more steps than simply setting the id (a combined sketch follows the steps):
Set the id:
VBoxManage list vms
Find the id and set it in {project-folder}/.vagrant/machines/default/virtualbox/id.
Note that default may be different if set in your Vagrantfile e.g. config.vm.define "someothername".
Stop the machine from provisioning:
Create a file named action_provision in the same dir as the id file; set its contents to 1.5:{id}, replacing {id} with the id found in step 1.
Setup a new public/private key:
Vagrant uses a private key stored in .vagrant/machines/default/virtualbox/private_key to ssh into the machine. You'll need to generate a new one.
ssh-keygen -t rsa
name it private_key.
Run vagrant ssh, then copy private_key.pub into /home/vagrant/.ssh/authorized_keys.
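A combined sketch of the steps above ({id} is a placeholder for the UUID from step 1; paths assume the machine is named default):
cd {project-folder}/.vagrant/machines/default/virtualbox
printf '{id}' > id                      # step 1: associate the VM
printf '1.5:{id}' > action_provision    # step 2: skip re-provisioning
ssh-keygen -t rsa -f private_key        # step 3: new private_key / private_key.pub
# then: vagrant ssh and append private_key.pub to /home/vagrant/.ssh/authorized_keys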
Update, having hit the same problem today with Vagrant 1.7.4:
useful thread at https://github.com/mitchellh/vagrant/issues/1755
and especially the following commands.
For example, to pair the box 'vip-quickstart_default_1431365185830_12124' with Vagrant:
$ VBoxManage list vms
"vip-quickstart_default_1431365185830_12124" {50feafd3-74cd-40b5-a170-3c976348de27}
$ echo -n "50feafd3-74cd-40b5-a170-3c976348de27" > .vagrant/machines/default/virtualbox/id
For multi-VM setups, it would look like this:
{
  "active":{
    "web":"a1fc9ae4-5d43-49cb-be31-ab3c4f74745d",
    "db":"13503bc5-76b8-4c26-95c4-32435b372212"
  }
}
You can get the vm names from the Vagrantfile used to create those VMs. Look for this line:
config.vm.define :web do |web_config|
"web" is the name of the vm in this case.
This is modified from #Petecoop's answer.
Run vagrant halt if you haven't shut down the box yet.
Then list your virtualboxes: VBoxManage list vms
It'll list all of your VirtualBox machines. Identify the box you want to revert to and grab the id between the curly brackets: {}.
Then edit the project id file: sudo nano .vagrant/machines/default/virtualbox/id (from the project directory)
Replace it with the id you copied from the list of VBs.
Try vagrant reload.
If that doesn't work and it gets hung on SSH authorization (where I stumbled), copy the insecure private key from the Vagrant git repository and replace the contents of .vagrant/machines/default/virtualbox/private_key with it. Back up the original, of course: cp private_key private_key-bak.
Then run vagrant reload. It'll say it's identified the insecure key and create a new one.
default: Vagrant insecure key detected. Vagrant will automatically replace
default: this with a newly generated keypair for better security.
default: Inserting generated public key within guest...
default: Removing insecure key from the guest if it's present...
default: Key inserted! Disconnecting and reconnecting using new SSH key...
==> default: Machine booted and ready!
You should be all set.
I'm using Vagrant 1.8.1 on OSX El Capitan
My VM was not shut down correctly when my computer restarted, so when I tried vagrant up it kept creating a new VM. None of the solutions here worked for me, but a variation of ingmmurillo's answer did.
So instead of creating .vagrant/machines/default/virtualbox/id based on the id from running VBoxManage list vms, I had to update the id in .vagrant/machines/local/virtualbox/id.
I've got a one liner that essentially does this for me:
echo -n `VBoxManage list vms | head -n 1 | awk '{print substr($2, 2, length($2)-2)}'` > .vagrant/machines/local/virtualbox/id
This assumes the first box listed by VBoxManage list vms is the one I need to start.
In Vagrant 1.9.1:
I had a VM in VirtualBox named 'Ubuntu 16.04.1', so I packaged it as a Vagrant box with:
vagrant package --base "Ubuntu 16.04.1"
which responds with...
==> Ubuntu 16.04.1: Exporting VM...
==> Ubuntu 16.04.1: Compressing package to: blah blah/package.box
I'm on macOS and found that removing the .locks on the boxes solved my problem.
For some reason
vagrant halt
did not remove these locks, and after restoring all my settings in .vagrant/machines/default/virtualbox using Time Machine and removing the locks, the right machine booted up.
Only one minor problem remains: it booted into GRUB, so I had to press Enter once. I don't know if this will persist, but I'll find out soon enough.
I'm running Vagrant 1.7.4 and VirtualBox 5.0.2.
For me, deleting the id file worked:
cd yourVagrantProject/.vagrant/machines/default/virtualbox/
rm id