Packer + AWS + Ansible + Windows not working?

I want to build an AMI with Packer and Ansible.
I have tried many configurations, but I still have a problem connecting to the instance.
Here is my Packer configuration:
{
  "variables": {
    "aws_access_key": "{{env `AWS_ACCESS_KEY_ID`}}",
    "aws_secret_key": "{{env `AWS_SECRET_ACCESS_KEY`}}",
    "region": "us-east-1"
  },
  "builders": [
    {
      "type": "amazon-ebs",
      "access_key": "{{ user `aws_access_key` }}",
      "secret_key": "{{ user `aws_secret_key` }}",
      "region": "{{ user `region` }}",
      "instance_type": "t2.micro",
      "source_ami_filter": {
        "filters": {
          "virtualization-type": "hvm",
          "name": "*Windows_Server-2012-R2*English-64Bit-Base*",
          "root-device-type": "ebs"
        },
        "most_recent": true,
        "owners": "amazon"
      },
      "ami_name": "packer-demo-{{timestamp}}",
      "user_data_file": "userdata/windows-aws.txt",
      "communicator": "winrm",
      "winrm_username": "Administrator"
    }
  ],
  "provisioners": [
    {
      "type": "powershell",
      "inline": [
        "dir c:\\"
      ]
    },
    {
      "type": "ansible",
      "playbook_file": "./win-playbook.yml",
      "extra_arguments": [
        "--connection", "packer", "-vvv",
        "--extra-vars", "ansible_shell_type=powershell ansible_shell_executable=None"
      ]
    }
  ]
}
The user data script activates WinRM on the AWS instance:
<powershell>
winrm quickconfig -q
winrm set winrm/config/winrs '@{MaxMemoryPerShellMB="300"}'
winrm set winrm/config '@{MaxTimeoutms="1800000"}'
winrm set winrm/config/service '@{AllowUnencrypted="true"}'
winrm set winrm/config/service/auth '@{Basic="true"}'
netsh advfirewall firewall add rule name="WinRM 5985" protocol=TCP dir=in localport=5985 action=allow
netsh advfirewall firewall add rule name="WinRM 5986" protocol=TCP dir=in localport=5986 action=allow
net stop winrm
sc config winrm start=auto
net start winrm
Set-ExecutionPolicy -ExecutionPolicy Bypass -Scope LocalMachine
</powershell>
Here is the win-playbook.yml file:
---
- hosts: all
  tasks:
    - win_ping:
I have packer.py installed in the ~/.ansible/plugins/connection_plugins/ directory and configured in /etc/ansible/ansible.cfg:
root@ip-172-31-30-11:~/demo# grep connection_plugins /etc/ansible/ansible.cfg
connection_plugins = /root/.ansible/plugins/connection_plugins
root@ip-172-31-30-11:~/demo# ll /root/.ansible/plugins/connection_plugins
total 16
drwx------ 2 root root 4096 May 2 16:58 ./
drwx------ 4 root root 4096 May 2 17:11 ../
-rwx--x--x 1 root root 511 May 2 16:53 packer.py*
and then this is output error:
==> amazon-ebs: Provisioning with Ansible...
==> amazon-ebs: Executing Ansible: ansible-playbook --extra-vars packer_build_name=amazon-ebs packer_builder_type=amazon-ebs -i /tmp/packer-provisioner-ansible962278842 /root/demo/win-playbook.yml -e ansible_ssh_private_key_file=/tmp/ansible-key842946567 --connection packer -vvv --extra-vars ansible_shell_type=powershell ansible_shell_executable=None
amazon-ebs: ansible-playbook 2.5.2
amazon-ebs: config file = /etc/ansible/ansible.cfg
amazon-ebs: configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
amazon-ebs: ansible python module location = /usr/lib/python2.7/dist-packages/ansible
amazon-ebs: executable location = /usr/bin/ansible-playbook
amazon-ebs: python version = 2.7.12 (default, Dec 4 2017, 14:50:18) [GCC 5.4.0 20160609]
amazon-ebs: Using /etc/ansible/ansible.cfg as config file
amazon-ebs: Parsed /tmp/packer-provisioner-ansible962278842 inventory source with ini plugin
amazon-ebs:
amazon-ebs: PLAYBOOK: win-playbook.yml *****************************************************
amazon-ebs: 1 plays in /root/demo/win-playbook.yml
amazon-ebs:
amazon-ebs: PLAY [all] *********************************************************************
amazon-ebs:
amazon-ebs: TASK [Gathering Facts] *********************************************************
amazon-ebs: task path: /root/demo/win-playbook.yml:2
amazon-ebs: Using module file /usr/lib/python2.7/dist-packages/ansible/modules/windows/setup.ps1
amazon-ebs: <127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: root
amazon-ebs: The full traceback is:
amazon-ebs: Traceback (most recent call last):
amazon-ebs: File "/usr/lib/python2.7/dist-packages/ansible/executor/task_executor.py", line 138, in run
amazon-ebs: res = self._execute()
amazon-ebs: File "/usr/lib/python2.7/dist-packages/ansible/executor/task_executor.py", line 558, in _execute
amazon-ebs: result = self._handler.run(task_vars=variables)
amazon-ebs: File "/usr/lib/python2.7/dist-packages/ansible/plugins/action/normal.py", line 46, in run
amazon-ebs: result = merge_hash(result, self._execute_module(task_vars=task_vars, wrap_async=wrap_async))
amazon-ebs: File "/usr/lib/python2.7/dist-packages/ansible/plugins/action/__init__.py", line 705, in _execute_module
amazon-ebs: self._make_tmp_path()
amazon-ebs: File "/usr/lib/python2.7/dist-packages/ansible/plugins/action/__init__.py", line 251, in _make_tmp_path
amazon-ebs: result = self._low_level_execute_command(cmd, sudoable=False)
amazon-ebs: File "/usr/lib/python2.7/dist-packages/ansible/plugins/action/__init__.py", line 902, in _low_level_execute_command
amazon-ebs: rc, stdout, stderr = self._connection.exec_command(cmd, in_data=in_data, sudoable=sudoable)
amazon-ebs: File "/usr/lib/python2.7/dist-packages/ansible/plugins/connection/ssh.py", line 976, in exec_command
amazon-ebs: use_tty = self.get_option('use_tty')
amazon-ebs: File "/usr/lib/python2.7/dist-packages/ansible/plugins/__init__.py", line 58, in get_option
amazon-ebs: option_value = C.config.get_config_value(option, plugin_type=get_plugin_class(self), plugin_name=self._load_name, variables=hostvars)
amazon-ebs: File "/usr/lib/python2.7/dist-packages/ansible/config/manager.py", line 284, in get_config_value
amazon-ebs: value, _drop = self.get_config_value_and_origin(config, cfile=cfile, plugin_type=plugin_type, plugin_name=plugin_name, keys=keys, variables=variables)
amazon-ebs: File "/usr/lib/python2.7/dist-packages/ansible/config/manager.py", line 304, in get_config_value_and_origin
amazon-ebs: defs = self._plugins[plugin_type][plugin_name]
amazon-ebs: KeyError: 'connection'
amazon-ebs: fatal: [default]: FAILED! => {
amazon-ebs: "msg": "Unexpected failure during module execution.",
amazon-ebs: "stdout": ""
amazon-ebs: }
amazon-ebs: to retry, use: --limit @/root/demo/win-playbook.retry
amazon-ebs:
amazon-ebs: PLAY RECAP *********************************************************************
amazon-ebs: default : ok=0 changed=0 unreachable=0 failed=1
packer version: 1.2.3
ansible version: 2.5.2

This issue appears to be common with Ansible 2.5.x and Packer. adarobin commented on the Packer issue https://github.com/hashicorp/packer/issues/5845; we ran into the same problem, tested the solution, and it worked for us:
I was hitting the KeyError: 'connection' issue with Ansible 2.5 on
Packer 1.2.2 with the AWS builder and I think I have discovered the
issue. It looks like Ansible now requires plugins to have a
documentation string. I copied the documentation string from the SSH
connection plugin (since that is what the packer plugin is based on)
made a few changes, and my packer.py now looks like this:
https://gist.github.com/adarobin/2f02b8b993936233e15d76f6cddb9e00
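For context, here is a heavily abbreviated sketch of what that fix looks like. Roughly, Ansible 2.5 builds each plugin's option table from a module-level DOCUMENTATION string (YAML), so a connection plugin without one fails with exactly the KeyError: 'connection' shown above. The option shown below is illustrative only; the actual gist copies the SSH plugin's full documentation block:

```python
# Abbreviated, illustrative sketch of the documented packer.py header.
# Ansible 2.5+ parses this YAML DOCUMENTATION string to register the plugin's
# options; without it, self.get_option(...) raises KeyError: 'connection'.
DOCUMENTATION = """
    connection: packer
    short_description: SSH-based connection used by the Packer ansible provisioner
    description:
        - Based on Ansible's ssh connection plugin; Packer proxies the
          transport to the build instance.
    options:
        use_tty:
            description: force tty allocation (add -tt to ssh commands)
            default: True
            type: bool
            env:
                - name: ANSIBLE_SSH_USETTY
            ini:
                - section: ssh_connection
                  key: usetty
"""

# The rest of packer.py subclasses the ssh connection plugin unchanged
# (omitted here); the key point is only that the documentation block
# exists and names the plugin.
print("connection: packer" in DOCUMENTATION)
```

The full working version is in the gist linked above.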


Packer provisioning by Ansible fails in AWS CodeBuild

My CodeBuild project creates an AMI with Packer, using the Ansible provisioner.
These Packer settings succeed in my local environment and on an Amazon Linux 2 EC2 instance. However, the build fails when I use AWS CodeBuild with the aws/codebuild/amazonlinux2-x86_64-standard:1.0 image.
I already tried setting remote_tmp = /tmp or remote_tmp = /tmp/.ansible-${USER}/tmp, but it did not work. The error is:
Authentication or permission failure, did not have permissions on the remote directory
version: 0.2
phases:
  install:
    runtime-versions:
      python: 3.7
  pre_build:
    commands:
      - python --version
      - pip --version
      - curl -qL -o packer.zip https://releases.hashicorp.com/packer/1.4.3/packer_1.4.3_linux_amd64.zip && unzip packer.zip
      - ./packer version
      - pip install --user ansible==2.8.5
      - ansible --version
      - echo 'Validate packer json'
      - ./packer validate packer.json
  build:
    commands:
      - ./packer build -color=false packer.json | tee build.log
{
  "builders": [{
    "type": "amazon-ebs",
    "region": "ap-northeast-1",
    "ami_regions": "ap-northeast-1",
    "source_ami": "ami-0ff21806645c5e492",
    "instance_type": "t2.micro",
    "ssh_username": "ec2-user",
    "ami_name": "packer-quick-start {{timestamp}}",
    "ami_description": "created by packer at {{timestamp}}",
    "ebs_optimized": false,
    "tags": {
      "OS_Version": "Amazon Linux AMI 2018.03",
      "timestamp": "{{timestamp}}",
      "isotime": "{{isotime \"2006-01-02 03:04:05\"}}"
    },
    "disable_stop_instance": false
  }],
  "provisioners": [
    {
      "type": "ansible",
      "extra_arguments": [
        "-vvv"
      ],
      "playbook_file": "ansible/main.yaml"
    }
  ]
}
==> amazon-ebs: Prevalidating AMI Name: packer-quick-start 1569943272
amazon-ebs: Found Image ID: ami-0ff21806645c5e492
==> amazon-ebs: Creating temporary keypair: packer_5d936ee8-541f-5c9a-6955-9672526afc1a
==> amazon-ebs: Creating temporary security group for this instance: packer_5d936ef1-6546-d9d0-60ff-2dc4c011036f
==> amazon-ebs: Authorizing access to port 22 from [0.0.0.0/0] in the temporary security groups...
==> amazon-ebs: Launching a source AWS instance...
==> amazon-ebs: Adding tags to source instance
amazon-ebs: Adding tag: "Name": "Packer Builder"
amazon-ebs: Instance ID: i-04b00db56a8b3b6d0
==> amazon-ebs: Waiting for instance (i-04b00db56a8b3b6d0) to become ready...
==> amazon-ebs: Using ssh communicator to connect: 3.112.61.8
==> amazon-ebs: Waiting for SSH to become available...
==> amazon-ebs: Connected to SSH!
==> amazon-ebs: Provisioning with Ansible...
==> amazon-ebs: Executing Ansible: ansible-playbook --extra-vars packer_build_name=amazon-ebs packer_builder_type=amazon-ebs -o IdentitiesOnly=yes -i /tmp/packer-provisioner-ansible244097143 /codebuild/output/src965785042/src/github.com/repoUsername/reponame/ansible/main.yaml -e ansible_ssh_private_key_file=/tmp/ansible-key242793848 -vvv
amazon-ebs: ansible-playbook 2.8.5
amazon-ebs: config file = /codebuild/output/src965785042/src/github.com/repoUsername/reponame/ansible.cfg
amazon-ebs: configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
amazon-ebs: ansible python module location = /root/.local/lib/python3.7/site-packages/ansible
amazon-ebs: executable location = /root/.local/bin/ansible-playbook
amazon-ebs: python version = 3.7.4 (default, Sep 20 2019, 22:55:10) [GCC 7.3.1 20180303 (Red Hat 7.3.1-5)]
amazon-ebs: Using /codebuild/output/src965785042/src/github.com/repoUsername/reponame/ansible.cfg as config file
amazon-ebs: host_list declined parsing /tmp/packer-provisioner-ansible244097143 as it did not pass it's verify_file() method
amazon-ebs: script declined parsing /tmp/packer-provisioner-ansible244097143 as it did not pass it's verify_file() method
amazon-ebs: auto declined parsing /tmp/packer-provisioner-ansible244097143 as it did not pass it's verify_file() method
amazon-ebs: Parsed /tmp/packer-provisioner-ansible244097143 inventory source with ini plugin
amazon-ebs:
amazon-ebs: PLAYBOOK: main.yaml ************************************************************
amazon-ebs: 1 plays in /codebuild/output/src965785042/src/github.com/repoUsername/reponame/ansible/main.yaml
amazon-ebs:
amazon-ebs: PLAY [all] *********************************************************************
amazon-ebs: META: ran handlers
amazon-ebs:
amazon-ebs: TASK [be sure httpd is installed] **********************************************
amazon-ebs: task path: /codebuild/output/src965785042/src/github.com/repoUsername/reponame/ansible/main.yaml:6
amazon-ebs: <127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: root
amazon-ebs: <127.0.0.1> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=35595 -o 'IdentityFile="/tmp/ansible-key242793848"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/02aaab1733 127.0.0.1 '/bin/sh -c '"'"'echo ~root && sleep 0'"'"''
amazon-ebs: <127.0.0.1> (0, b'/root\n', b"Warning: Permanently added '[127.0.0.1]:35595' (RSA) to the list of known hosts.\r\n")
amazon-ebs: <127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: root
amazon-ebs: <127.0.0.1> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=35595 -o 'IdentityFile="/tmp/ansible-key242793848"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/02aaab1733 127.0.0.1 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1569943320.4544108-49329379039882 `" && echo ansible-tmp-1569943320.4544108-49329379039882="` echo /root/.ansible/tmp/ansible-tmp-1569943320.4544108-49329379039882 `" ) && sleep 0'"'"''
amazon-ebs: <127.0.0.1> (1, b'', b'mkdir: cannot create directory \xe2\x80\x98/root\xe2\x80\x99: Permission denied\n')
amazon-ebs: <127.0.0.1> Failed to connect to the host via ssh: mkdir: cannot create directory ‘/root’: Permission denied
amazon-ebs: fatal: [default]: UNREACHABLE! => {
amazon-ebs: "changed": false,
amazon-ebs: "msg": "Authentication or permission failure. In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \"/tmp\". Failed command was: ( umask 77 && mkdir -p \"` echo /root/.ansible/tmp/ansible-tmp-1569943320.4544108-49329379039882 `\" && echo ansible-tmp-1569943320.4544108-49329379039882=\"` echo /root/.ansible/tmp/ansible-tmp-1569943320.4544108-49329379039882 `\" ), exited with result 1",
amazon-ebs: "unreachable": true
amazon-ebs: }
amazon-ebs:
amazon-ebs: PLAY RECAP *********************************************************************
amazon-ebs: default : ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0
amazon-ebs:
==> amazon-ebs: Terminating the source AWS instance...
==> amazon-ebs: Cleaning up any extra volumes...
==> amazon-ebs: No volumes to clean up, skipping
==> amazon-ebs: Deleting temporary security group...
==> amazon-ebs: Deleting temporary keypair...
I know it fails because it tried to mkdir /root and got Permission denied.
But I don't know why it tried to mkdir /root. How can I change this behavior?
I solved it, and the cause was very simple.
Because AWS CodeBuild runs the build as the root user, the Packer Ansible provisioner defaulted to connecting as root, so Ansible tried to create its temporary directory under /root on the instance. I set the provisioner's user explicitly, and that solved it:
"provisioners": [
  {
    "type": "ansible",
    "user": "ec2-user",
    "playbook_file": "ansible/main.yaml"
  }
]
My Ansible playbook is simple, just for testing:
---
- hosts: all
  become: yes
  gather_facts: no
  tasks:
    - name: be sure httpd is installed
      yum: name=httpd state=installed
    - name: be sure httpd is running and enabled
      service: name=httpd state=started enabled=yes

How do I identify an issue with crypto while running an Ansible playbook?

Trying to run a simple gather_facts playbook using Ansible. I can connect via SSH using the user credentials with no issues, but, for a reason I cannot get my head around, the playbook fails with the following message:
2017-10-07 22:57:44,248 ncclient.transport.ssh Unknown exception: cannot import name aead
OS: Ubuntu (Ubuntu 16.04.3 LTS)
Destination Router: Virtualbox JunOS Olive [12.1R1.9]
Ansible Version: 2.4.0.0
hosts:
[all:vars]
ansible_python_interpreter=/usr/bin/python
ansible_connection = local
[junos]
lab.r1
Playbook:
---
- hosts: junos
  gather_facts: no
  tasks:
    - name: obtain login credentials
      include_vars: ../auth/secrets.yml
    - name: Checking NETCONF connectivity
      wait_for: host={{ inventory_hostname }} port=830 timeout=5
    - name: Gather Facts
      junos_facts:
        host: "{{ inventory_hostname }}"
        username: "{{ creds['username'] }}"
        password: "{{ creds['password'] }}"
      register: junos
    - name: version
      debug: msg="{{ junos.facts.version }}"
Playbook output:
$ ansible-playbook -vvvv junos-get_facts.yml
ansible-playbook 2.4.0.0
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/usr/local/lib/python2.7/dist-packages/ansible/modules']
ansible python module location = /usr/local/lib/python2.7/dist-packages/ansible
executable location = /usr/local/bin/ansible-playbook
python version = 2.7.12 (default, Nov 19 2016, 06:48:10) [GCC 5.4.0 20160609]
Using /etc/ansible/ansible.cfg as config file
setting up inventory plugins
Parsed /etc/ansible/hosts inventory source with ini plugin
Loading callback plugin default of type stdout, v2.0 from /usr/local/lib/python2.7/dist-packages/ansible/plugins/callback/__init__.pyc
PLAYBOOK: junos-get_facts.yml ******************************************************************************************************************
1 plays in junos-get_facts.yml
PLAY [junos] ***********************************************************************************************************************************
META: ran handlers
TASK [obtain login credentials] ****************************************************************************************************************
task path: /usr/local/share/ansible/junos/junos-get_facts.yml:6
Trying secret FileVaultSecret(filename='/usr/local/share/ansible/auth/vault/vault_pass.py') for vault_id=default
ok: [lab.r1] => {
"ansible_facts": {
"creds": {
"password": "*******",
"username": "ansible"
}
},
"ansible_included_var_files": [
"/usr/local/share/ansible/junos/../auth/secrets.yml"
],
"changed": false,
"failed": false
}
TASK [Checking NETCONF connectivity] ***********************************************************************************************************
task path: /usr/local/share/ansible/junos/junos-get_facts.yml:9
Using module file /usr/local/lib/python2.7/dist-packages/ansible/modules/utilities/logic/wait_for.py
<lab.r1> ESTABLISH LOCAL CONNECTION FOR USER: ansible
<lab.r1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1507431462.1-117888621897412 `" && echo ansible-tmp-1507431462.1-117888621897412="` echo $HOME/.ansible/tmp/ansible-tmp-1507431462.1-117888621897412 `" ) && sleep 0'
<lab.r1> PUT /tmp/tmpW193y0 TO /usr/local/share/ansible/.ansible/tmp/ansible-tmp-1507431462.1-117888621897412/wait_for.py
<lab.r1> EXEC /bin/sh -c 'chmod u+x /usr/local/share/ansible/.ansible/tmp/ansible-tmp-1507431462.1-117888621897412/ /usr/local/share/ansible/.ansible/tmp/ansible-tmp-1507431462.1-117888621897412/wait_for.py && sleep 0'
<lab.r1> EXEC /bin/sh -c '/usr/bin/python /usr/local/share/ansible/.ansible/tmp/ansible-tmp-1507431462.1-117888621897412/wait_for.py; rm -rf "/usr/local/share/ansible/.ansible/tmp/ansible-tmp-1507431462.1-117888621897412/" > /dev/null 2>&1 && sleep 0'
ok: [lab.r1] => {
"changed": false,
"elapsed": 0,
"failed": false,
"invocation": {
"module_args": {
"active_connection_states": [
"ESTABLISHED",
"FIN_WAIT1",
"FIN_WAIT2",
"SYN_RECV",
"SYN_SENT",
"TIME_WAIT"
],
"connect_timeout": 5,
"delay": 0,
"exclude_hosts": null,
"host": "lab.r1",
"msg": null,
"path": null,
"port": 830,
"search_regex": null,
"sleep": 1,
"state": "started",
"timeout": 5
}
},
"path": null,
"port": 830,
"search_regex": null,
"state": "started"
}
TASK [Gather Facts] ****************************************************************************************************************************
task path: /usr/local/share/ansible/junos/junos-get_facts.yml:12
<lab.r1> using connection plugin netconf
<lab.r1> socket_path: None
fatal: [lab.r1]: FAILED! => {
"changed": false,
"failed": true,
"msg": "unable to open shell. Please see: https://docs.ansible.com/ansible/network_debug_troubleshooting.html#unable-to-open-shell"
}
to retry, use: --limit @/usr/local/share/ansible/junos/junos-get_facts.retry
PLAY RECAP *************************************************************************************************************************************
lab.r1 : ok=2 changed=0 unreachable=0 failed=1
The detailed log output shows the following:
2017-10-07 23:19:51,177 p=2906 u=ansible | TASK [Gather Facts] ****************************************************************************************************************************
2017-10-07 23:19:51,180 p=2906 u=ansible | task path: /usr/local/share/ansible/junos/junos-get_facts.yml:12
2017-10-07 23:19:52,739 p=2937 u=ansible | creating new control socket for host lab.r1:830 as user ansible
2017-10-07 23:19:52,740 p=2937 u=ansible | control socket path is /usr/local/share/ansible/.ansible/pc/b52ae79c72
2017-10-07 23:19:52,740 p=2937 u=ansible | current working directory is /usr/local/share/ansible/junos
2017-10-07 23:19:52,741 p=2937 u=ansible | using connection plugin netconf
2017-10-07 23:19:52,937 p=2937 u=ansible | network_os is set to junos
2017-10-07 23:19:52,951 p=2937 u=ansible | ssh connection done, stating ncclient
2017-10-07 23:19:52,982 p=2937 u=ansible | failed to create control socket for host lab.r1
2017-10-07 23:19:52,985 p=2937 u=ansible | Traceback (most recent call last):
File "/usr/local/bin/ansible-connection", line 316, in main
server = Server(socket_path, pc)
File "/usr/local/bin/ansible-connection", line 112, in __init__
self.connection._connect()
File "/usr/local/lib/python2.7/dist-packages/ansible/plugins/connection/netconf.py", line 158, in _connect
ssh_config=ssh_config
File "/usr/local/lib/python2.7/dist-packages/ncclient/manager.py", line 154, in connect
return connect_ssh(*args, **kwds)
File "/usr/local/lib/python2.7/dist-packages/ncclient/manager.py", line 116, in connect_ssh
session.load_known_hosts()
File "/usr/local/lib/python2.7/dist-packages/ncclient/transport/ssh.py", line 299, in load_known_hosts
self._host_keys.load(filename)
File "/usr/local/lib/python2.7/dist-packages/paramiko/hostkeys.py", line 97, in load
e = HostKeyEntry.from_line(line, lineno)
File "/usr/local/lib/python2.7/dist-packages/paramiko/hostkeys.py", line 358, in from_line
key = ECDSAKey(data=decodebytes(key), validate_point=False)
File "/usr/local/lib/python2.7/dist-packages/paramiko/ecdsakey.py", line 156, in __init__
self.verifying_key = numbers.public_key(backend=default_backend())
File "/usr/local/lib/python2.7/dist-packages/cryptography/hazmat/backends/__init__.py", line 15, in default_backend
from cryptography.hazmat.backends.openssl.backend import backend
File "/usr/local/lib/python2.7/dist-packages/cryptography/hazmat/backends/openssl/__init__.py", line 7, in <module>
from cryptography.hazmat.backends.openssl.backend import backend
File "/usr/local/lib/python2.7/dist-packages/cryptography/hazmat/backends/openssl/backend.py", line 23, in <module>
from cryptography.hazmat.backends.openssl import aead
ImportError: cannot import name aead
2017-10-07 23:20:02,775 p=2906 u=ansible | fatal: [lab.r1]: FAILED! => {
"changed": false,
"failed": true,
"msg": "unable to open shell. Please see: https://docs.ansible.com/ansible/network_debug_troubleshooting.html#unable-to-open-shell"
}
Any help is appreciated.
The answer, by Paul Kehrer, was:
aead is being imported by the backend, but also can't be found. This sounds like it may be trying to import two different versions of cryptography. pycrypto is irrelevant here (it is an unrelated package). First I'd suggest upgrading cryptography, but since aead was added in 2.0 you may need to make sure you don't have cryptography installed both via pip and via your distribution's package manager.
Once I removed pycrypto and cryptography via pip, the playbook ran as expected:
TASK [version] *************************************************************************************************************************************************
task path: /usr/local/share/ansible/junos/junos-get_facts.yml:25
ok: [lab.r1] => {
"msg": "olive"
}
META: ran handlers
META: ran handlers
PLAY RECAP *****************************************************************************************************************************************************
lab.r1 : ok=5 changed=0 unreachable=0 failed=0
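A quick way to check for the duplicate-install situation Kehrer describes is to ask Python where it resolves each package from. This is a generic diagnostic sketch (the two package names are just the ones from this thread):

```python
import importlib.util

def locate(pkg):
    """Return the file a top-level package resolves to, or None if not installed."""
    spec = importlib.util.find_spec(pkg)
    return getattr(spec, "origin", None) if spec else None

# "Crypto" is pycrypto's import name; "cryptography" is pyca/cryptography.
for pkg in ("cryptography", "Crypto"):
    print(pkg, "->", locate(pkg))

# A copy under site-packages (pip) shadowing one under dist-packages (apt),
# or vice versa, is the mixed-install conflict to clean up.
```

Run it with both interpreters you might be using (e.g. /usr/bin/python and any virtualenv) and compare the paths.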

How to use an AWS profile when using the Ansible ec2.py module

I wrote a quick Ansible playbook to launch a simple EC2 instance, but I think I have an issue with how I'm authenticating.
What I don't want to do is set my AWS access/secret keys as environment variables, since they expire each hour and I need to regenerate the ~/.aws/credentials file via a script.
Right now, my ansible playbook looks like this:
--- # Launch ec2
- name: Create ec2 instance
  hosts: local
  connection: local
  gather_facts: false
  vars:
    profile: profile_xxxx
    key_pair: usrxxx
    region: us-east-1
    subnet: subnet-38xxxxx
    security_groups: ['sg-e54xxxx', 'sg-bfcxxxx', 'sg-a9dxxx']
    image: ami-031xxx
    instance_type: t2.small
    num_instances: 1
    tag_name: ansibletest
    hdd_volumes:
      - device_name: /dev/sdf
        volume_size: 50
        delete_on_termination: true
      - device_name: /dev/sdh
        volume_size: 50
        delete_on_termination: true
  tasks:
    - name: launch ec2
      ec2:
        count: 1
        key_name: "{{ key_pair }}"
        profile: "{{ profile }}"
        group_id: "{{ security_groups }}"
        instance_type: "{{ instance_type }}"
        image: "{{ image }}"
        region: "{{ region }}"
        vpc_subnet_id: "{{ subnet }}"
        assign_public_ip: false
        volumes: "{{ hdd_volumes }}"
        instance_tags:
          Name: "{{ tag_name }}"
          ASV: "{{ tag_asv }}"
          CMDBEnvironment: "{{ tag_cmdbEnv }}"
          EID: "{{ tag_eid }}"
          OwnerContact: "{{ tag_eid }}"
      register: ec2
    - name: print ec2 vars
      debug: var=ec2
my hosts file is this:
[local]
localhost ansible_python_interpreter=/usr/local/bin/python2.7
I run my playbook like this:
ansible-playbook -i hosts launchec2.yml -vvv
and then get this back:
PLAYBOOK: launchec2.yml ********************************************************
1 plays in launchec2.yml
PLAY [Create ec2 instance] *****************************************************
TASK [launch ec2] **************************************************************
task path: /Users/usrxxx/Desktop/cloud-jumper/Ansible/launchec2.yml:27
Using module file /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/ansible/modules/core/cloud/amazon/ec2.py
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: usrxxx
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo ~/.ansible/tmp/ansible-tmp-1485527483.82-106272618422730 `" && echo ansible-tmp-1485527483.82-106272618422730="` echo ~/.ansible/tmp/ansible-tmp-1485527483.82-106272618422730 `" ) && sleep 0'
<localhost> PUT /var/folders/cx/_fdv7nkn6dz21798p_bn9dp9ln9sqc/T/tmpnk2rh5 TO /Users/usrxxx/.ansible/tmp/ansible-tmp-1485527483.82-106272618422730/ec2.py
<localhost> PUT /var/folders/cx/_fdv7nkn6dz21798p_bn9dp9ln9sqc/T/tmpEpwenH TO /Users/usrxxx/.ansible/tmp/ansible-tmp-1485527483.82-106272618422730/args
<localhost> EXEC /bin/sh -c 'chmod u+x /Users/usrxxx/.ansible/tmp/ansible-tmp-1485527483.82-106272618422730/ /Users/usrxxx/.ansible/tmp/ansible-tmp-1485527483.82-106272618422730/ec2.py /Users/usrxxx/.ansible/tmp/ansible-tmp-1485527483.82-106272618422730/args && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/bin/env python /Users/usrxxx/.ansible/tmp/ansible-tmp-1485527483.82-106272618422730/ec2.py /Users/usrxxx/.ansible/tmp/ansible-tmp-1485527483.82-106272618422730/args; rm -rf "/Users/usrxxx/.ansible/tmp/ansible-tmp-1485527483.82-106272618422730/" > /dev/null 2>&1 && sleep 0'
fatal: [localhost]: FAILED! => {
"changed": false,
"failed": true,
"invocation": {
"module_name": "ec2"
},
"module_stderr": "usage: ec2.py [-h] [--list] [--host HOST] [--refresh-cache]\n [--profile BOTO_PROFILE]\nec2.py: error: unrecognized arguments: /Users/usrxxx/.ansible/tmp/ansible-tmp-1485527483.82-106272618422730/args\n",
"module_stdout": "",
"msg": "MODULE FAILURE"
}
to retry, use: --limit @/Users/usrxxx/Desktop/cloud-jumper/Ansible/launchec2.retry
PLAY RECAP *********************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1
I noticed in the ec2.py file it says this:
NOTE: This script assumes Ansible is being executed where the environment
variables needed for Boto have already been set:
export AWS_ACCESS_KEY_ID='AK123'
export AWS_SECRET_ACCESS_KEY='abc123'
This script also assumes there is an ec2.ini file alongside it. To specify a
different path to ec2.ini, define the EC2_INI_PATH environment variable:
export EC2_INI_PATH=/path/to/my_ec2.ini
If you're using eucalyptus you need to set the above variables and
you need to define:
export EC2_URL=http://hostname_of_your_cc:port/services/Eucalyptus
If you're using boto profiles (requires boto>=2.24.0) you can choose a profile
using the --boto-profile command line argument (e.g. ec2.py --boto-profile prod) or using
the AWS_PROFILE variable:
AWS_PROFILE=prod ansible-playbook -i ec2.py myplaybook.yml
so I ran it like this:
AWS_PROFILE=profile_xxxx ansible-playbook -i hosts launchec2.yml -vvv
but still got the same results...
----EDIT-----
I also ran it like this:
export ANSIBLE_HOST_KEY_CHECKING=false
export AWS_ACCESS_KEY=<your aws access key here>
export AWS_SECRET_KEY=<your aws secret key here>
ansible-playbook -i hosts launchec2.yml
but still got this back... it still seems to be a credentials issue?
usrxxx$ ansible-playbook -i hosts launchec2.yml
PLAY [Create ec2 instance] *****************************************************
TASK [launch ec2] **************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "module_stderr": "usage: ec2.py [-h] [--list] [--host HOST] [--refresh-cache]\n [--profile BOTO_PROFILE]\nec2.py: error: unrecognized arguments: /Users/usrxxx/.ansible/tmp/ansible-tmp-1485531356.01-33528208838066/args\n", "module_stdout": "", "msg": "MODULE FAILURE"}
to retry, use: --limit @/Users/usrxxx/Desktop/cloud-jumper/Ansible/launchec2.retry
PLAY RECAP *********************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1
---EDIT 2------
Completely removed Ansible and then reinstalled it with Homebrew, but got the same error... so I went to the directory it looks in for ec2.py (Using module file /usr/local/Cellar/ansible/2.2.1.0/libexec/lib/python2.7/site-packages/ansible/modules/core/cloud/amazon/ec2.py) and replaced that ec2.py with this one: https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/ec2.py. But now I get this error:
Using /Users/usrxxx/ansible/ansible.cfg as config file
PLAYBOOK: launchec2.yml ********************************************************
1 plays in launchec2.yml
PLAY [Create ec2 instance] *****************************************************
TASK [aws : launch ec2] ********************************************************
task path: /Users/usrxxx/Desktop/cloud-jumper/Ansible/roles/aws/tasks/main.yml:1
Using module file /usr/local/Cellar/ansible/2.2.1.0/libexec/lib/python2.7/site-packages/ansible/modules/core/cloud/amazon/ec2.py
fatal: [localhost]: FAILED! => {
"failed": true,
"msg": "module (ec2) is missing interpreter line"
}
It seems you have placed the ec2.py dynamic inventory script in your /path/to/playbook/library/ folder.
You should not put dynamic inventory scripts there: that way Ansible runs the inventory script instead of the ec2 module, which is why you see the "unrecognized arguments" and "missing interpreter line" errors.
Remove ec2.py from your project's library folder (or from the global library path defined in ansible.cfg) and try again.

Docker containers exiting without identifiable cause (Django web application)

I've taken over the maintenance of a live web project that utilizes docker containers. Immediately, I've noticed that the web app goes down after a couple of hours, and docker ps -a shows me:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9b02f1352f15 nginx:latest "nginx -g 'daemon off" 9 weeks ago Exited (1) 14 hours ago 80/tcp, 443/tcp, 0.0.0.0:80->8000/tcp ng01
8079b3d3b398 webapp_web "gunicorn --error-log" 9 weeks ago Exited (1) 14 hours ago 8000/tcp webapp_web_1
564fe0b72fa6 d0f5f9c3d3a6 "/bin/sh -c 'apt-get " 12 weeks ago Exited (0) 12 weeks ago modest_perlman
6cddbfcfa8f6 d0f5f9c3d3a6 "/bin/sh -c 'apt-get " 12 weeks ago Exited (0) 12 weeks ago backstabbing_goldwasser
7460be4f4451 postgres "/docker-entrypoint.s" 4 months ago Exited (1) 14 hours ago 5432/tcp webapp_db_1
Notice the 3 containers that exited 14 hours ago - those relate to the web app. How do I diagnose/fix this problem? Being a beginner, I'm struggling here. Thanks in advance! Following are some diagnostics I tried to run.
I used docker logs on the errant containers to see what could be going wrong.
docker logs 9b02f1352f15 (nginx) is empty.
docker logs 8079b3d3b398 (application server - gunicorn) shows many incidences of:
Exception in thread Thread-725265:
Traceback (most recent call last):
File "/usr/local/lib/python2.7/threading.py", line 801, in __bootstrap_inner
self.run()
File "/usr/local/lib/python2.7/threading.py", line 754, in run
self.__target(*self.__args, **self.__kwargs)
File "/usr/local/lib/python2.7/site-packages/unirest/__init__.py", line 97, in __request
response = urllib2.urlopen(req, timeout=_timeout)
File "/usr/local/lib/python2.7/urllib2.py", line 154, in urlopen
return opener.open(url, data, timeout)
File "/usr/local/lib/python2.7/urllib2.py", line 429, in open
response = self._open(req, data)
File "/usr/local/lib/python2.7/urllib2.py", line 447, in _open
'_open', req)
File "/usr/local/lib/python2.7/urllib2.py", line 407, in _call_chain
result = func(*args)
File "/usr/local/lib/python2.7/site-packages/poster/streaminghttp.py", line 142, in http_open
return self.do_open(StreamingHTTPConnection, req)
File "/usr/local/lib/python2.7/urllib2.py", line 1198, in do_open
raise URLError(err)
URLError: <urlopen error [Errno 101] Network is unreachable>
docker logs 7460be4f4451 (postgresql backend) shows many incidences of:
LOG: database system was interrupted; last known up at 2017-01-22 12:42:46 UTC
LOG: database system was not properly shut down; automatic recovery in progress
LOG: invalid record length at 0/17AAD28
LOG: redo is not required
LOG: MultiXact member wraparound protections are now enabled
LOG: database system is ready to accept connections
LOG: autovacuum launcher started
In case it matters, running tail -f /var/run/upstart/docker.log gives this output:
INFO[0000] Firewalld running: false
time="2017-01-23T14:38:47.142345718Z" level=error msg="devmapper: Error unmounting device 3da5c7e87cc8969249d7ed8b15c9cea9296feaefeba62fde534b4c183e4edbd4: Device is Busy"
time="2017-01-23T14:38:47.142542018Z" level=error msg="Error unmounting container 8079b3d3b3988a793537c4116bd12c70823415cb84068021d25658a316d8f568: Device is Busy"
INFO[0000] Firewalld running: false
INFO[0000] Firewalld running: false
time="2017-01-23T14:40:20.694963580Z" level=error msg="devmapper: Error unmounting device 6fd51632808dede3fea81b4f19fb84f1a13d93c38917e2845d0776de6a2ef941: Device is Busy"
time="2017-01-23T14:40:20.695010680Z" level=error msg="Error unmounting container 9b02f1352f15447acb7669bff918db0eeed58dc832fff565f4b2a4236474db1f: Device is Busy"
INFO[0000] Firewalld running: false
time="2017-01-23T14:41:27.307003457Z" level=error msg="devmapper: Error unmounting device 7c5a35dc9fc1929e57de43f5efc8e05a9442782c7496713d313c95fe62910f7b: Device is Busy"
time="2017-01-23T14:41:27.307059457Z" level=error msg="Error unmounting container 7460be4f445102274dd4aba4f113db23c17f58b9f771fc5c474d174766ae593c: Device is Busy"
I also tried docker inspect on all three. Following are the results relating to state - there seems to be nothing wrong here.
docker inspect 9b02f1352f15(nginx):
"State": {
"Status": "exited",
"Running": false,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 0,
"ExitCode": 1,
"Error": "",
"StartedAt": "2017-01-22T12:42:47.142236155Z",
"FinishedAt": "2017-01-22T23:38:46.7038628Z"
},
docker inspect 8079b3d3b398 (application server - gunicorn):
"State": {
"Status": "exited",
"Running": false,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 0,
"ExitCode": 1,
"Error": "",
"StartedAt": "2017-01-22T12:42:42.602662338Z",
"FinishedAt": "2017-01-22T23:38:46.5945186Z"
},
docker inspect 7460be4f4451 (postgresql backend):
"State": {
"Status": "exited",
"Running": false,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 0,
"ExitCode": 1,
"Error": "",
"StartedAt": "2017-01-22T12:42:34.413283342Z",
"FinishedAt": "2017-01-22T23:38:46.5102334Z"
},
docker-compose.yml is simply:
version: '2'
services:
  db:
    image: postgres
  web:
    build: .
    command: gunicorn --error-logfile err.log myapp.wsgi:application -b 0.0.0.0:8000
    volumes:
      - .:/code
    expose:
      - "8000"
    depends_on:
      - db
  nginx:
    image: nginx:latest
    container_name: ng01
    ports:
      - "80:8000"
    volumes:
      - .:/src
      - ./config/nginx:/etc/nginx/conf.d
    depends_on:
      - web
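One common mitigation (not a root-cause fix, and assuming the Compose v2 syntax used above) is to add restart policies so the containers come back up automatically after a crash or a daemon restart:

```yaml
# docker-compose.yml fragment - restart policies are a suggested mitigation,
# not part of the original file
services:
  db:
    restart: unless-stopped
  web:
    restart: unless-stopped
  nginx:
    restart: unless-stopped
```

This won't explain why all three containers stopped at the same time (the docker.log "Device is Busy" entries suggest the daemon itself went down), but it keeps the stack self-healing while you investigate.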

set ansible fact from concatenation of two lists

I wish to combine the output of two commands into one new variable that I can use as args for another command:
---
- hosts: '{{target}}'
  tasks:
    - name: determine storage nfs mount points
      shell: /usr/sbin/showmount -d | grep -v Directories
      register: nfs
      ignore_errors: yes

    - debug: var=nfs.stdout_lines

    - name: determine storage xrd mount points
      shell: df | grep /xrd | awk '{print $6}'
      register: xrd

    - debug: var=xrd.stdout_lines

    - name: determine all mount points
      set_fact: mounts="{{ nfs.stdout_lines }} + {{ xrd.stdout_lines }}"

    - name: run gather script
      script: gather.py {{mounts.stdout_lines|join(" ")}} > /tmp/gather.txt
      register: gather
However, when I run it, it fails with:
PLAY [ltda-srv050] *************************************************************
TASK [setup] *******************************************************************
ok: [ltda-srv050]
TASK [determine storage nfs mount points] **************************************
fatal: [ltda-srv050]: FAILED! => {"changed": true, "cmd": "/usr/sbin/showmount -d | grep -v Directories", "delta": "0:00:00.011269", "end": "2016-09-14 23:48:14.489385", "failed": true, "rc": 1, "start": "2016-09-14 23:48:14.478116", "stderr": "clnt_create: RPC: Program not registered", "stdout": "", "stdout_lines": [], "warnings": []}
...ignoring
TASK [debug] *******************************************************************
ok: [ltda-srv050] => {
"nfs.stdout_lines": []
}
TASK [determine storage xrd mount points] **************************************
changed: [ltda-srv050]
TASK [debug] *******************************************************************
ok: [ltda-srv050] => {
"xrd.stdout_lines": [
"/xrd/cache1",
"/xrd/cache2",
"/xrd/cache3",
"/xrd/cache4",
"/xrd/cache5",
"/xrd/cache6",
"/xrd/cache7",
"/xrd/cache8",
"/xrd/cache9",
"/xrd/cache10",
"/xrd/cache11"
]
}
TASK [determine all mount points] **********************************************
ok: [ltda-srv050]
TASK [run gather script] *******************************************************
fatal: [ltda-srv050]: FAILED! => {"failed": true, "msg": "the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 'list object' has no attribute 'stdout_lines'\n\nThe error appears to have been in '/afs/slac.stanford.edu/u/sf/ytl/work/storage/gather_file_attributes/retrieve_file_attributes.yml': line 21, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - name: run gather script\n ^ here\n"}
NO MORE HOSTS LEFT *************************************************************
[WARNING]: Could not create retry file 'retrieve_file_attributes.retry'. [Errno 2] No such file or directory: ''
PLAY RECAP *********************************************************************
ltda-srv050 : ok=6 changed=1 unreachable=0 failed=1
help...?
mounts is already a list, so calling .stdout_lines on it fails; removing .stdout_lines works :)
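Concretely, the last two tasks could be sketched like this (concatenating the two lists directly in Jinja, then using mounts as the plain list it is):

```yaml
- name: determine all mount points
  set_fact:
    mounts: "{{ nfs.stdout_lines + xrd.stdout_lines }}"

- name: run gather script
  script: gather.py {{ mounts | join(' ') }} > /tmp/gather.txt
  register: gather
```

set_fact stores the evaluated result, so mounts is a list of paths; only the registered command results (nfs, xrd, gather) carry attributes like stdout_lines.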