I wrote a quick Ansible playbook to launch a simple EC2 instance, but I think I have an issue with how I want to authenticate.
What I don't want to do is set my AWS access/secret keys as environment variables, since they expire every hour and I have to regenerate the ~/.aws/credentials file via a script.
Right now, my ansible playbook looks like this:
--- # Launch ec2
- name: Create ec2 instance
  hosts: local
  connection: local
  gather_facts: false
  vars:
    profile: profile_xxxx
    key_pair: usrxxx
    region: us-east-1
    subnet: subnet-38xxxxx
    security_groups: ['sg-e54xxxx', 'sg-bfcxxxx', 'sg-a9dxxx']
    image: ami-031xxx
    instance_type: t2.small
    num_instances: 1
    tag_name: ansibletest
    hdd_volumes:
      - device_name: /dev/sdf
        volume_size: 50
        delete_on_termination: true
      - device_name: /dev/sdh
        volume_size: 50
        delete_on_termination: true
  tasks:
    - name: launch ec2
      ec2:
        count: 1
        key_name: "{{ key_pair }}"
        profile: "{{ profile }}"
        group_id: "{{ security_groups }}"
        instance_type: "{{ instance_type }}"
        image: "{{ image }}"
        region: "{{ region }}"
        vpc_subnet_id: "{{ subnet }}"
        assign_public_ip: false
        volumes: "{{ hdd_volumes }}"
        instance_tags:
          Name: "{{ tag_name }}"
          ASV: "{{ tag_asv }}"
          CMDBEnvironment: "{{ tag_cmdbEnv }}"
          EID: "{{ tag_eid }}"
          OwnerContact: "{{ tag_eid }}"
      register: ec2
    - name: print ec2 vars
      debug: var=ec2
my hosts file is this:
[local]
localhost ansible_python_interpreter=/usr/local/bin/python2.7
I run my playbook like this:
ansible-playbook -i hosts launchec2.yml -vvv
and then get this back:
PLAYBOOK: launchec2.yml ********************************************************
1 plays in launchec2.yml
PLAY [Create ec2 instance] *****************************************************
TASK [launch ec2] **************************************************************
task path: /Users/usrxxx/Desktop/cloud-jumper/Ansible/launchec2.yml:27
Using module file /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/ansible/modules/core/cloud/amazon/ec2.py
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: usrxxx
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo ~/.ansible/tmp/ansible-tmp-1485527483.82-106272618422730 `" && echo ansible-tmp-1485527483.82-106272618422730="` echo ~/.ansible/tmp/ansible-tmp-1485527483.82-106272618422730 `" ) && sleep 0'
<localhost> PUT /var/folders/cx/_fdv7nkn6dz21798p_bn9dp9ln9sqc/T/tmpnk2rh5 TO /Users/usrxxx/.ansible/tmp/ansible-tmp-1485527483.82-106272618422730/ec2.py
<localhost> PUT /var/folders/cx/_fdv7nkn6dz21798p_bn9dp9ln9sqc/T/tmpEpwenH TO /Users/usrxxx/.ansible/tmp/ansible-tmp-1485527483.82-106272618422730/args
<localhost> EXEC /bin/sh -c 'chmod u+x /Users/usrxxx/.ansible/tmp/ansible-tmp-1485527483.82-106272618422730/ /Users/usrxxx/.ansible/tmp/ansible-tmp-1485527483.82-106272618422730/ec2.py /Users/usrxxx/.ansible/tmp/ansible-tmp-1485527483.82-106272618422730/args && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/bin/env python /Users/usrxxx/.ansible/tmp/ansible-tmp-1485527483.82-106272618422730/ec2.py /Users/usrxxx/.ansible/tmp/ansible-tmp-1485527483.82-106272618422730/args; rm -rf "/Users/usrxxx/.ansible/tmp/ansible-tmp-1485527483.82-106272618422730/" > /dev/null 2>&1 && sleep 0'
fatal: [localhost]: FAILED! => {
"changed": false,
"failed": true,
"invocation": {
"module_name": "ec2"
},
"module_stderr": "usage: ec2.py [-h] [--list] [--host HOST] [--refresh-cache]\n [--profile BOTO_PROFILE]\nec2.py: error: unrecognized arguments: /Users/usrxxx/.ansible/tmp/ansible-tmp-1485527483.82-106272618422730/args\n",
"module_stdout": "",
"msg": "MODULE FAILURE"
}
to retry, use: --limit @/Users/usrxxx/Desktop/cloud-jumper/Ansible/launchec2.retry
PLAY RECAP *********************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1
I noticed in the ec2.py file it says this:
NOTE: This script assumes Ansible is being executed where the environment
variables needed for Boto have already been set:
export AWS_ACCESS_KEY_ID='AK123'
export AWS_SECRET_ACCESS_KEY='abc123'
This script also assumes there is an ec2.ini file alongside it. To specify a
different path to ec2.ini, define the EC2_INI_PATH environment variable:
export EC2_INI_PATH=/path/to/my_ec2.ini
If you're using eucalyptus you need to set the above variables and
you need to define:
export EC2_URL=http://hostname_of_your_cc:port/services/Eucalyptus
If you're using boto profiles (requires boto>=2.24.0) you can choose a profile
using the --boto-profile command line argument (e.g. ec2.py --boto-profile prod) or using
the AWS_PROFILE variable:
AWS_PROFILE=prod ansible-playbook -i ec2.py myplaybook.yml
so I ran it like this:
AWS_PROFILE=profile_xxxx ansible-playbook -i hosts launchec2.yml -vvv
but still got the same results...
----EDIT-----
I also ran it like this:
export ANSIBLE_HOST_KEY_CHECKING=false
export AWS_ACCESS_KEY=<your aws access key here>
export AWS_SECRET_KEY=<your aws secret key here>
ansible-playbook -i hosts launchec2.yml
but still got this back. It still seems to be a credentials issue?
usrxxx$ ansible-playbook -i hosts launchec2.yml
PLAY [Create ec2 instance] *****************************************************
TASK [launch ec2] **************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "module_stderr": "usage: ec2.py [-h] [--list] [--host HOST] [--refresh-cache]\n [--profile BOTO_PROFILE]\nec2.py: error: unrecognized arguments: /Users/usrxxx/.ansible/tmp/ansible-tmp-1485531356.01-33528208838066/args\n", "module_stdout": "", "msg": "MODULE FAILURE"}
to retry, use: --limit @/Users/usrxxx/Desktop/cloud-jumper/Ansible/launchec2.retry
PLAY RECAP *********************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1
---EDIT 2------
I completely removed Ansible and then reinstalled it with Homebrew but got the same error, so I went to the directory where it looks for ec2.py (Using module file /usr/local/Cellar/ansible/2.2.1.0/libexec/lib/python2.7/site-packages/ansible/modules/core/cloud/amazon/ec2.py) and replaced that ec2.py with this one: https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/ec2.py ...but now I get this error:
Using /Users/usrxxx/ansible/ansible.cfg as config file
PLAYBOOK: launchec2.yml ********************************************************
1 plays in launchec2.yml
PLAY [Create ec2 instance] *****************************************************
TASK [aws : launch ec2] ********************************************************
task path: /Users/usrxxx/Desktop/cloud-jumper/Ansible/roles/aws/tasks/main.yml:1
Using module file /usr/local/Cellar/ansible/2.2.1.0/libexec/lib/python2.7/site-packages/ansible/modules/core/cloud/amazon/ec2.py
fatal: [localhost]: FAILED! => {
"failed": true,
"msg": "module (ec2) is missing interpreter line"
}
It seems you have placed the ec2.py dynamic inventory script into your /path/to/playbook/library/ folder.
You should not put dynamic inventory scripts there: anything in library/ shadows the real module, so Ansible runs the inventory script instead of the ec2 module.
Remove ec2.py from your project's library folder (or from the global library path defined in ansible.cfg) and try again.
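For example, assuming the layout described above (the paths are placeholders, not taken from the question):
rm /path/to/playbook/library/ec2.py
If you still want ec2.py for dynamic inventory, keep it outside library/ and pass it to ansible-playbook with -i, e.g. ansible-playbook -i /path/to/ec2.py launchec2.yml.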
This seems like such a basic task, and yet it is failing.
I am simply trying to create a new directory and use it as a mount point for a GCP Filestore share. The VM is running Debian 10.
This is the code in main.yml in my role:
- name: install mount tools
  ansible.builtin.apt:
    name: nfs-common
    state: present

- name: create filestore mount point
  ansible.builtin.file:
    path: /cloudapp_vol1
    state: directory
    mode: '0755'
    owner: "{{ clouduser }}"
    group: "{{ clouduser }}"

- name: mount filestore
  ansible.posix.mount:
    src: "{{ storage.filestore }}"
    path: /cloudapp_vol1
    opts: defaults
    state: mounted
    fstype: nfs
Everything runs fine until the last task, which fails with this error:
fatal: [10.10.61.189]: FAILED! => {"changed": false, "msg": "Error mounting /cloudapp_vol1: mount: nfs: mount point does not exist.\n"}
What could I be doing wrong?
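One possibility worth checking (an assumption, not a confirmed diagnosis): for an NFS mount, src has to be in host:/share form, so {{ storage.filestore }} must resolve to the Filestore IP plus the share name, not the IP alone. A minimal sketch with hypothetical values:
- name: mount filestore
  ansible.posix.mount:
    src: "10.0.0.2:/share1"   # hypothetical Filestore IP and share name
    path: /cloudapp_vol1
    opts: defaults
    state: mounted
    fstype: nfs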
I'm using Ansible to deploy a Filestore instance on GCP. I need to get the IP from the instance and use it to create a mount point.
gcloud works fine, but it returns the IP wrapped in brackets and single quotes.
Can someone help me remove these characters, please? My regex doesn't work, and I'm a newbie with regex.
The "cannot resolve" error from the mount task comes from the '' around the IP in ip.stdout.
Ansible code:
- name: get info
  shell: gcloud filestore instances describe "{{nfs_id}}" --project=xxxx-xxxx --zone=xxxxx-b --format='get(networks.ipAddresses)'
  register: ip

- name: master_setup.yml --> DEBUG REGEX
  debug:
    var: "{{ 'ip.stdout' | regex_replace('([^\\.]*)\\.(.+)$', '\\1') }}"

- name: print mount point test
  debug:
    msg: "{{ip.stdout}}:/{{nfs_name }}"

- name: Mount an NFS volume
  mount:
    fstype: nfs
    state: mounted
    opts: rw,sync,hard,intr
    src: "{{ip.stdout}}:/{{nfs_name }}"
    path: /mnt/nexus-storage
Result of the ansible-playbook execution:
TASK [install_nexus : master_setup.yml --> DEBUG REGEX] ********************************************************
ok: [nexus-xxxx.xxx.xxxxx] => {
"ip": {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3"
},
"changed": true,
"cmd": "ggcloud filestore instances describe "{{nfs_id}}" --project=xxxx-xxxx --zone=xxxxx-b --format='get(networks.ipAddresses)'",
"delta": "0:00:01.013823",
"end": "2021-03-14 21:23:32.398266",
"failed": false,
"rc": 0,
"start": "2021-03-14 21:23:31.384443",
"stderr": "",
"stderr_lines": [],
"stdout": "['1xx.xxx.xx.2']",
"stdout_lines": [
"['1xx.xxx.xx.2']"
]
}
}
TASK [install_nexus : print mount point test] ******************************************************************
ok: [nexus-xxxx.xxx.xxxxx] => {
"msg": "['1xx.xxx.xx.2']:/nfsnexusnew"
}
TASK [install_nexus : Mount an NFS volume] *********************************************************************
[WARNING]: sftp transfer mechanism failed on [nexus-ppd.preprod.d-aim.com]. Use ANSIBLE_DEBUG=1 to see detailed
information
fatal: [nexus-xxxx.xxx.xxxxx]: FAILED! => {"changed": false, "msg": "Error mounting /mnt/nexus-storage: mount.nfs: Failed to resolve server '1xx.xxx.xx.2': Name or service not known\n"}
Thx
I resolved it by doing this. It's not very pretty, but it works; if someone finds a better solution, please let me know.
I used two sed regexes because I don't know how to remove the single quotes and brackets in one pass (see the sketch after the playbook):
- name: get info
  shell: gcloud filestore instances describe "{{nfs_id}}" --project=xxxx-xxxx --zone=xxxxx-b --format='get(networks.ipAddresses)' > /tmp/nfs-ip.txt

- name: sed regex to delete []
  shell: sed -i 's/[][]//g' /tmp/nfs-ip.txt

- name: sed regex to delete ''
  shell: sed -i 's|["'\'']||g' /tmp/nfs-ip.txt

- name: register result in var ip
  shell: cat /tmp/nfs-ip.txt
  register: ip

- name: Mount an NFS volume
  mount:
    fstype: nfs
    state: mounted
    opts: rw,sync,hard,intr
    src: "{{ip.stdout}}:/{{nfs_name }}"
    path: /mnt/nexus-storage
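For what it's worth, the two sed tasks could probably be collapsed into one by putting the brackets and the quote into a single character class. An untested sketch, not from the thread:
- name: sed regex to delete [] and ''
  shell: sed -i "s/[][']//g" /tmp/nfs-ip.txt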
Q: "Cannot resolve ip.stdout"
A: The value stored in ip.stdout is a string:
"ip": {
    ...
    "stdout": "['1xx.xxx.xx.2']",
    ...
}
Use the filters from_yaml and first to get the first item of the list, e.g.
src: "{{ ip.stdout|from_yaml|first }}:/{{ nfs_name }}"
I'm new to Ansible, Ansible Tower, and AWS CloudFormation, and am trying to have Ansible Tower deploy an EC2 Container Service using a CloudFormation template. I try to run the deploy job and run into the error below.
TASK [create/update stack] *****************************************************
task path: /var/lib/awx/projects/_6__api/tasks/create_stack.yml:2
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: awx
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1470427494.79-207756006727790 `" && echo ansible-tmp-1470427494.79-207756006727790="` echo $HOME/.ansible/tmp/ansible-tmp-1470427494.79-207756006727790 `" ) && sleep 0'
<127.0.0.1> PUT /tmp/tmpgAsKKv TO /var/lib/awx/.ansible/tmp/ansible-tmp-1470427494.79-207756006727790/cloudformation
<127.0.0.1> EXEC /bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-coqlkeqywlqhagfixtfpfotjgknremaw; LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 AWS_DEFAULT_REGION=us-west-2 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /var/lib/awx/.ansible/tmp/ansible-tmp-1470427494.79-207756006727790/cloudformation; rm -rf "/var/lib/awx/.ansible/tmp/ansible-tmp-1470427494.79-207756006727790/" > /dev/null 2>&1'"'"' && sleep 0'
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_name": "cloudformation"}, "module_stderr": "/bin/sh: /usr/bin/sudo: Permission denied\n", "module_stdout": "", "msg": "MODULE FAILURE", "parsed": false}
This is the create/update task:
---
- name: create/update stack
  cloudformation:
    stack_name: my-stack
    state: present
    template: templates/stack.yml
    template_format: yaml
    template_parameters:
      VpcId: "{{ vpc_id }}"
      SubnetId: "{{ subnet_id }}"
      KeyPair: "{{ ec2_keypair }}"
      DbUsername: "{{ db_username }}"
      DbPassword: "{{ db_password }}"
      InstanceCount: "{{ instance_count | default(1) }}"
    tags:
      Environment: test
  register: cf_stack

- debug: msg={{ cf_stack }}
  when: debug is defined
The playbook that Ansible Tower executes is a site.yml file:
---
- name: Deployment Playbook
  hosts: localhost
  connection: local
  gather_facts: no
  environment:
    AWS_DEFAULT_REGION: "{{ lookup('env', 'AWS_DEFAULT_REGION') | default('us-west-2', true) }}"
  tasks:
    - include: tasks/create_stack.yml
    - include: tasks/deploy_app.yml
This is what my playbook folder structure looks like:
/deploy
    /group_vars
        all
    /library
        aws_ecs_service.py
        aws_ecs_task.py
        aws_ecs_taskdefinition.py
    /tasks
        stack.yml
    /templates
    site.yml
I'm basing everything on Justin Menga's Pluralsight course "Continuous Delivery using Docker and Ansible", but he uses Jenkins, not Ansible Tower, which is probably where the disconnect comes from. Anyway, hopefully that is enough information; let me know if I should also provide the stack.yml file. The files under the library directory are Menga's customized modules from his video course.
Thanks for reading all this and for any potential help! This is a link to his deploy playbook repository, which I closely modeled everything after: https://github.com/jmenga/todobackend-deploy. The things I took out are the DB RDS stuff.
If you look at the last two lines of the error message, you can see that it is attempting to escalate privileges but failing:
<127.0.0.1> EXEC /bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-coqlkeqywlqhagfixtfpfotjgknremaw; LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 AWS_DEFAULT_REGION=us-west-2 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /var/lib/awx/.ansible/tmp/ansible-tmp-1470427494.79-207756006727790/cloudformation; rm -rf "/var/lib/awx/.ansible/tmp/ansible-tmp-1470427494.79-207756006727790/" > /dev/null 2>&1'"'"' && sleep 0'
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_name": "cloudformation"}, "module_stderr": "/bin/sh: /usr/bin/sudo: Permission denied\n", "module_stdout": "", "msg": "MODULE FAILURE", "parsed": false}
As this is a local task, it is attempting to switch to the root user on the box that Ansible Tower is running on, and the awx user presumably (and for good reason) doesn't have the privileges to do this.
With normal Ansible you can avoid this by not specifying the --become or -b flags on the command line, or by specifying become: false in the task/play definition, for example:
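A minimal sketch of that playbook-level fix, assuming a local play like the site.yml above:
---
- name: Deployment Playbook
  hosts: localhost
  connection: local
  gather_facts: no
  become: false        # don't attempt sudo on the Tower host
  tasks:
    - include: tasks/create_stack.yml
    - include: tasks/deploy_app.yml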
As you pointed out in the comments, with Ansible Tower it's a case of unticking the "Enable Privilege Escalation" option in the job template.
I followed these 3 guides:
http://docs.ansible.com/ansible/guide_aws.html
http://docs.ansible.com/ansible/ec2_module.html
https://gist.github.com/tristanfisher/e5a306144a637dc739e7
and I wrote this Ansible play
---
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - include_vars: aws_credentials.yml
    - name: Creating EC2 Ubuntu instance
      ec2:
        instance_type: t1.micro
        image: ami-86e0ffe7
        region: us-west-2
        key_name: my-aws-key
        zone: us-west-2a
        vpc_subnet_id: subnet-04199d61
        group_id: sg-cf6736aa
        assign_public_ip: yes
        count: 1
        wait: true
        volumes:
          - device_name: /dev/sda1
            volume_type: gp2
            volume_size: 10
        instance_tags:
          Name: ansible-test
          Project: test
          Ansible: manageable
      register: ec2
then I run ansible-playbook create-ec2.yml -v --private-key ~/.ssh/my-key --vault-password-file ~/.password/to_ansible_vault
and I got this message:
PLAY [localhost] ***************************************************************
TASK [include_vars] ************************************************************
ok: [localhost] => {"ansible_facts": {"ec2_access_key": "decrypted_acces_key_XXXXX", "ec2_secret_key": "decrypted_secret_key_XXXXX"}, "changed": false}
TASK [Creating EC2 Ubuntu instance] ********************************************
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "msg": "No handler was ready to authenticate. 1 handlers were checked. ['HmacAuthV4Handler'] Check your credentials"}
NO MORE HOSTS LEFT *************************************************************
[WARNING]: Could not create retry file 'create-ec2.retry'. [Errno 2] No such file or directory: ''
PLAY RECAP *********************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=1
When I ran ansible-vault view aws_credentials.yml --vault-password-file ~/.password/to_ansible_vault, I got the readable content of the encrypted aws_credentials.yml,
something like this:
---
ec2_access_key: "XXXXX"
ec2_secret_key: "XXXXX"
It also doesn't work when I use a plain (unencrypted) aws_credentials.yml. It only works without any failure when I export my credentials.
Could somebody help me: how can I write a playbook that creates an EC2 instance with credentials stored in an encrypted file?
I think you should supply your keys directly to the ec2 module in this case.
Try this:
- name: Creating EC2 Ubuntu instance
  ec2:
    aws_access_key: "{{ ec2_access_key }}"
    aws_secret_key: "{{ ec2_secret_key }}"
    instance_type: t1.micro
    image: ami-86e0ffe7
    region: us-west-2
    ...
The module code suggests that it only checks the module's arguments and environment variables, not host variables, so credentials loaded with include_vars have to be passed in explicitly.
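Combining that suggestion with the vaulted vars file from the question, a complete minimal play might look like this (a sketch; the variable names come from the question's aws_credentials.yml):
---
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - include_vars: aws_credentials.yml   # provides ec2_access_key / ec2_secret_key
    - name: Creating EC2 Ubuntu instance
      ec2:
        aws_access_key: "{{ ec2_access_key }}"
        aws_secret_key: "{{ ec2_secret_key }}"
        instance_type: t1.micro
        image: ami-86e0ffe7
        region: us-west-2
      register: ec2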
Alternatively, you can export your AWS API keys as OS environment variables, e.g.:
export AWS_ACCESS_KEY=XXXXXXX
In that case, in your Ansible scenario you need to set:
- name: Creating EC2 Ubuntu instance
  ec2:
    aws_access_key: "{{ lookup('env', 'AWS_ACCESS_KEY') }}"
    aws_secret_key: "{{ lookup('env', 'AWS_SECRET_KEY') }}"
    instance_type: t1.micro
    image: ami-86e0ffe7
    region: us-west-2
I am getting an error when I try to provision an EC2 instance. This is how I set up my environment.
I put my AWS credentials in ~/.boto.
cat /etc/ansible/hosts
[local]
localhost
cat /etc/ansible/ec2-vars/testserver.yml
ec2_keypair: "ansible"
ec2_security_group: "sg-*******"
ec2_instance_type: "t2.micro"
ec2_image: "ami-********"
ec2_subnet_ids: ['subnet-*******','subnet-REDACTED','subnet-REDACTED']
ec2_region: "us-east-1"
ec2_tag_Name: "testserver"
ec2_tag_Type: "testserver"
ec2_tag_Environment: "development"
ec2_volume_size: 8
cat /etc/ansible/provision-ec2.yml
---
- hosts: localhost
  connection: local
  gather_facts: false
  user: root
  pre_tasks:
    - include_vars: ec2_vars/{{type}}.yml
  roles:
    - provision-ec2
cat /etc/ansible/roles/provision-ec2/tasks/main.yml
---
- name: Provision EC2 Box
  local_action:
    module: ec2
    key_name: "{{ ec2_keypair }}"
    group_id: "{{ ec2_security_group }}"
    instance_type: "{{ ec2_instance_type }}"
    image: "{{ ec2_image }}"
    vpc_subnet_id: "{{ ec2_subnet_ids|random }}"
    region: "{{ ec2_region }}"
    instance_tags: '{"Name":"{{ec2_tag_Name}}","Type":" {{ec2_tag_Type}}","Environment":"{{ec2_tag_Environment}}"}'
    assign_public_ip: yes
    wait: true
    count: 1
    volumes:
      - device_name: /dev/sda1
        device_type: gp2
        volume_size: "{{ ec2_volume_size }}"
        delete_on_termination: true
  register: ec2

- debug: var=item
  with_items: ec2.instances

- add_host: >
    name={{ item.public_ip }}
    groups=tag_Type_{{ec2_tag_Type}},tag_Environment_{{ec2_tag_Environment}}
    ec2_region={{ec2_region}}
    ec2_tag_Name={{ec2_tag_Name}}
    ec2_tag_Type={{ec2_tag_Type}}
    ec2_tag_Environment={{ec2_tag_Environment}}
    ec2_ip_address={{item.public_ip}}
  with_items: ec2.instances

- name: Wait for the instances to boot by checking the ssh port
  wait_for: host={{item.public_ip}} port=22 delay=60 timeout=320 state=started
  with_items: ec2.instances
Now I run the following command and this is what I get:
[root@ip-**-**-*** ansible]# ansible-playbook -vv -i localhost, -e "type=testservers" provision-ec2.yml
Using /etc/ansible/ansible.cfg as config file
PLAYBOOK: provision-ec2.yml ****************************************************
1 plays in provision-ec2.yml
PLAY [localhost] ***************************************************************
TASK [include_vars] ************************************************************
task path: /etc/ansible/provision-ec2.yml:7
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "file": "/etc/ansible/ec2_vars/testservers.yml", "msg": "Source file not found."}
NO MORE HOSTS LEFT *************************************************************
to retry, use: --limit @provision-ec2.retry
PLAY RECAP *********************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1
Please help.
New error:
TASK [provision-ec2 : Provision EC2 Box] ***************************************
task path: /etc/ansible/roles/provision-ec2/tasks/main.yml:2
fatal: [localhost -> localhost]: FAILED! => {"changed": false, "failed": true, "msg": "No handler was ready to authenticate. 1 handlers were checked. ['HmacAuthV4Handler'] Check your credentials"}
NO MORE HOSTS LEFT *************************************************************
to retry, use: --limit @provision-ec2.retry
PLAY RECAP *********************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=1
You are mixing underscores and hyphens: the directory is named ec2-vars, but include_vars references ec2_vars.
cat /etc/ansible/ec2-vars/testserver.yml
include_vars: ec2_vars/{{type}}.yml
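A minimal sketch of the fix is to make the two match, e.g. in /etc/ansible/provision-ec2.yml:
pre_tasks:
  - include_vars: ec2-vars/{{ type }}.yml
Note also that the vars file is named testserver.yml while the command passes -e "type=testservers"; the type value has to match the filename as well.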