I am trying to download a file from an S3 bucket. I ran aws configure and also exported my access key and secret key, but I am still getting the same error. Please suggest a fix.
Code:
- name: Download xx tarball
  s3:
    bucket: xxx
    object: folder/xx-commandline-4.0.3.tar.gz
    dest: '/tmp/{{ xx_tarball }}'
    mode: get
  when: 'st.stat.exists == false'
Error:
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "msg": "No Authentication Handler found: No handler was ready to authenticate. 1 handlers were checked. ['HmacAuthV1Handler'] Check your credentials "}
ansible --version
ansible 2.0.0.2
uname -a
Linux ip-xx-xxx-xx-x 4.4.0-1026-aws #35-Ubuntu SMP Thu Jul 20 21:59:09 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
You need to check a couple of things:
I- boto is installed on the target host, i.e. the host that downloads the file from S3:
sudo -H pip install boto
II- If this is a remote host, use this format:
- name: Download xx tarball
  s3:
    aws_access_key: "{{ AWS_S3_ACCESS_KEY }}"
    aws_secret_key: "{{ AWS_S3_SECRET_KEY }}"
    bucket: xxx
    object: folder/xx-commandline-4.0.3.tar.gz
    dest: '/tmp/{{ xx_tarball }}'
    mode: get
  when: st.stat.exists == false
Note: when you export the AWS credentials, it works locally but not for the remote host, so you need to pass the credentials to the module for it to work on a remote host.
Hope this helps.
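If you don't want to hard-code values into AWS_S3_ACCESS_KEY and AWS_S3_SECRET_KEY, a minimal sketch (assuming the standard AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY variables are exported on the control machine) is to read them with an env lookup; lookups are evaluated on the controller, so the values still reach a task that executes on a remote host:
- name: Download xx tarball
  s3:
    # env lookup runs on the control machine, not the target
    aws_access_key: "{{ lookup('env', 'AWS_ACCESS_KEY_ID') }}"
    aws_secret_key: "{{ lookup('env', 'AWS_SECRET_ACCESS_KEY') }}"
    bucket: xxx
    object: folder/xx-commandline-4.0.3.tar.gz
    dest: '/tmp/{{ xx_tarball }}'
    mode: get
  when: st.stat.exists == false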
Related
roles/tasks/main.yml code
- name: Download and unpack node exporter binary to /usr/local/bin
  unarchive:
    src: https://github.com/prometheus/node_exporter/releases/download/v1.2.2/node_exporter-1.2.2.linux-amd64.tar.gz
    dest: /usr/local/bin/
    remote_src: yes
    extra_opts: [--strip-components=1]
    owner: "ec2-user"
    group: "ec2-user"
node-exporter.yml code
---
- hosts: node-exporter
  become: false
  gather_facts: false
  roles:
    - roles
error message
fatal: [ip]: FAILED! => {"changed": false, "msg": "Failure downloading https://github.com/prometheus/node_exporter/releases/download/v1.2.2/node_exporter-1.2.2.linux-amd64.tar.gz, Request failed: <urlopen error [Errno -2] Name or service not known>"}
If I run "ansible -m ping node-exporter" I receive pong, and "ping www.google.com" works fine,
but this code does not work.
Please help me solve this problem or recommend alternative code.
(I use Amazon Linux.)
Something weird is happening there. In your role code you have https://github.com/, but in the error ssh://github.com. Are you sure you are running the latest version of your code?
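If the code is current, note that with remote_src: yes the download happens on the target host itself, so "Name or service not known" means the target cannot resolve github.com even though the control machine can. A hedged diagnostic sketch (host group taken from the question, the temp path is my assumption):
- hosts: node-exporter
  gather_facts: false
  tasks:
    - name: Check that the target itself can resolve github.com
      command: getent hosts github.com
      changed_when: false

    - name: Try the same download with get_url to isolate the unarchive step
      get_url:
        url: https://github.com/prometheus/node_exporter/releases/download/v1.2.2/node_exporter-1.2.2.linux-amd64.tar.gz
        dest: /tmp/node_exporter.tar.gz
If the getent task fails, the fix is DNS or proxy configuration on the instance, not the playbook.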
can you help me? This is my ansible script:
---
- hosts: "{{host_list}}"
  remote_user: root
  gather_facts: true
  tasks:
    - name: Check if Cloudwatch Agent is Installed Already
      command: service status amazon-cloudwatch-agent
      register: init_status_result
      ignore_errors: yes

    - debug:
        var: init_status_result.stderr
        verbosity: 4

    - name: Create Directory for Downloading Cloudwatch Agent zip
      file:
        path: /opt/aws/amazon-cloudwatch-zip
        state: directory
        owner: root
        group: root
        mode: '0755'
        recurse: no
      when: init_status_result.stderr is search("For other actions, please try to use systemctl")
I get this error when attempting to run my playbook (really I just want a way to continue through the playbook when the CloudWatch agent service status check finds nothing):
user1@ansible01-infra-mgnt:~/.ansible/playbooks/cw_agent$ ansible-playbook -K -i /home/user1/.ansible/etc/hosts --extra-vars="host_list=11.22.33.44" install_cw_agent.yml
SUDO password:
PLAY [11.22.33.44] **************************************************************************************************************************************************************************************************************************
TASK [Gathering Facts] **********************************************************************************************************************************************************************************************************************
ok: [11.22.33.44]
TASK [Check if Cloudwatch Agent is Installed Already] ***************************************************************************************************************************************************************************************
[WARNING]: Consider using the service module rather than running service. If you need to use command because service is insufficient you can add warn=False to this command task or set command_warnings=False in ansible.cfg to get rid
of this message.
fatal: [11.22.33.44]: FAILED! => {"changed": false, "cmd": "service status amazon-cloudwatch-agent", "msg": "[Errno 2] No such file or directory", "rc": 2}
...ignoring
TASK [debug] ********************************************************************************************************************************************************************************************************************************
skipping: [11.22.33.44]
TASK [Create Directory for Downloading Cloudwatch Agent zip] ********************************************************************************************************************************************************************************
fatal: [11.22.33.44]: FAILED! => {"msg": "The conditional check 'init_status_result.stderr is search (\"For other actions, please try to use systemctl\")' failed. The error was: Unexpected templating type error occurred on ({% if init_status_result.stderr is search (\"For other actions, please try to use systemctl\") %} True {% else %} False {% endif %}): expected string or buffer\n\nThe error appears to have been in '/home/user1/.ansible/playbooks/cw_agent/install_cw_agent.yml': line 15, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - name: Create Directory for Downloading Cloudwatch Agent zip\n ^ here\n"}
to retry, use: --limit #/home/user1/.ansible/playbooks/cw_agent/install_cw_agent.retry
PLAY RECAP **********************************************************************************************************************************************************************************************************************************
11.22.33.44 : ok=2 changed=0 unreachable=0 failed=1
Try the shell module to check the service status instead of the command module:
shell: "service service_name status"
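Separately, the "expected string or buffer" failure happens because stderr is undefined when the command itself could not be launched (note the "[Errno 2] No such file or directory" module error), so the search test receives a non-string. A hedged sketch that guards the conditional with a default value, so search always gets a string:
- name: Create Directory for Downloading Cloudwatch Agent zip
  file:
    path: /opt/aws/amazon-cloudwatch-zip
    state: directory
    owner: root
    group: root
    mode: '0755'
  # default('') makes the conditional safe when stderr is missing
  when: (init_status_result.stderr | default('')) is search("For other actions, please try to use systemctl")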
I have used Ansible to create 1 AWS EC2 instance using the examples in the Ansible ec2 documentation. I can successfully create the instance with a tag. Then I temporarily add it to my local inventory group using add_host.
After doing this, I am having trouble when I try to configure the newly created instance. In my Ansible play, I would like to specify the instance by its tag name, e.g. hosts: <tag_name_here>, but I am getting an error.
Here is what I have done so far:
My directory layout is
inventory/
  staging/
    hosts
    group_vars/
      all/
        all.yml
site.yml
My inventory/staging/hosts file is
[local]
localhost ansible_connection=local ansible_python_interpreter=/home/localuser/ansible_ec2/.venv/bin/python
My inventory/staging/group_vars/all/all.yml file is
---
ami_image: xxxxx
subnet_id: xxxx
region: xxxxx
launched_tag: tag_Name_NginxDemo
Here is my Ansible playbook site.yml
- name: Launch instance
  hosts: localhost
  gather_facts: no
  tasks:
    - ec2:
        key_name: key-nginx
        group: web_sg
        instance_type: t2.micro
        image: "{{ ami_image }}"
        wait: true
        region: "{{ region }}"
        vpc_subnet_id: "{{ subnet_id }}"
        assign_public_ip: yes
        instance_tags:
          Name: NginxDemo
        exact_count: 1
        count_tag:
          Name: NginxDemo
      register: ec2

    - name: Add EC2 instance to inventory group
      add_host:
        hostname: "{{ item.public_ip }}"
        groupname: tag_Name_NginxDemo
        ansible_user: centos_user
        ansible_become: yes
      with_items: "{{ ec2.instances }}"

- name: Configure EC2 instance in launched group
  hosts: tag_Name_NginxDemo
  become: True
  gather_facts: no
  tasks:
    - ping:
I run this playbook with
$ cd /home/localuser/ansible_ec2
$ source .venv/bin/activate
$ ansible-playbook -i inventory/staging site.yml -vvv
and this creates the EC2 instance; the first play works correctly. However, the second play gives the following error:
TASK [.....] ******************************************************************
The authenticity of host 'xx.xxx.xxx.xx (xx.xxx.xxx.xx)' can't be established.
ECDSA key fingerprint is XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX.
Are you sure you want to continue connecting (yes/no)? yes
fatal: [xx.xxx.xxx.xx]: FAILED! => {"changed": false, "module_stderr":
"Shared connection to xx.xxx.xxx.xx closed.\r\n", "module_stdout": "/bin/sh:
1: /usr/bin/python: not found\r\n", "msg": "MODULE FAILURE", "rc": 127}
I followed the instructions from this SO question to create the task with add_host, and from here to set gather_facts: False, but this still does not allow the play to run correctly.
How can I target the EC2 host using the tag name?
EDIT:
Additional info
This is the only playbook I have run to this point. I see this error means the instance requires Python, but I cannot install Python on the instance because I cannot connect to it in my play Configure EC2 instance in launched group. If I could make that connection, then I could install Python (if that is the problem). However, I'm not sure how to connect to the instance.
EDIT 2:
Here is my Python info on the localhost where I am running Ansible
I am running Ansible inside a Python venv.
Here is my python inside the venv
$ python --version
Python 2.7.15rc1
$ which python
~/ansible_ec2/.venv/bin/python
Here are my details about Ansible that I installed inside the Python venv
ansible 2.6.2
config file = /home/localuser/ansible_ec2/ansible.cfg
configured module search path = [u'/home/localuser/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /home/localuser/ansible_ec2/.venv/local/lib/python2.7/site-packages/ansible
executable location = /home/localuser/ansible_ec2/.venv/bin/ansible
python version = 2.7.15rc1 (default, xxxx, xxxx) [GCC 7.3.0]
Ok, so after a lot of searching, I found one possible workaround here. Basically, this workaround uses the lineinfile module and adds the new EC2 instance details to the hosts file permanently, not just for the in-memory plays that follow the add_host task. I followed this suggestion very closely and the approach worked for me. I did not need to use the add_host module.
EDIT:
The task I added with the lineinfile module was
- name: Add EC2 instance to inventory group
  lineinfile: line="{{ item.public_ip }} ansible_python_interpreter=/usr/bin/python3" insertafter=EOF dest=./inventory/staging/hosts
  with_items: "{{ ec2.instances }}"
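For what it's worth, the same interpreter hint can also be passed in memory through add_host, which avoids rewriting the hosts file; a sketch under the assumption that /usr/bin/python3 exists on the CentOS image:
- name: Add EC2 instance to inventory group
  add_host:
    hostname: "{{ item.public_ip }}"
    groupname: tag_Name_NginxDemo
    ansible_user: centos_user
    ansible_become: yes
    # any extra key becomes a host variable for the in-memory host
    ansible_python_interpreter: /usr/bin/python3
  with_items: "{{ ec2.instances }}"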
I wrote a quick Ansible playbook to launch a simple EC2 instance, but I think I have an issue with how I authenticate.
What I don't want to do is set my aws access/secret keys as env variables since they expire each hour and I need to regenerate the ~/.aws/credentials file via a script.
Right now, my ansible playbook looks like this:
--- # Launch ec2
- name: Create ec2 instance
  hosts: local
  connection: local
  gather_facts: false
  vars:
    profile: profile_xxxx
    key_pair: usrxxx
    region: us-east-1
    subnet: subnet-38xxxxx
    security_groups: ['sg-e54xxxx', 'sg-bfcxxxx', 'sg-a9dxxx']
    image: ami-031xxx
    instance_type: t2.small
    num_instances: 1
    tag_name: ansibletest
    hdd_volumes:
      - device_name: /dev/sdf
        volume_size: 50
        delete_on_termination: true
      - device_name: /dev/sdh
        volume_size: 50
        delete_on_termination: true
  tasks:
    - name: launch ec2
      ec2:
        count: 1
        key_name: "{{ key_pair }}"
        profile: "{{ profile }}"
        group_id: "{{ security_groups }}"
        instance_type: "{{ instance_type }}"
        image: "{{ image }}"
        region: "{{ region }}"
        vpc_subnet_id: "{{ subnet }}"
        assign_public_ip: false
        volumes: "{{ hdd_volumes }}"
        instance_tags:
          Name: "{{ tag_name }}"
          ASV: "{{ tag_asv }}"
          CMDBEnvironment: "{{ tag_cmdbEnv }}"
          EID: "{{ tag_eid }}"
          OwnerContact: "{{ tag_eid }}"
      register: ec2

    - name: print ec2 vars
      debug: var=ec2
my hosts file is this:
[local]
localhost ansible_python_interpreter=/usr/local/bin/python2.7
I run my playbook like this:
ansible-playbook -i hosts launchec2.yml -vvv
and then get this back:
PLAYBOOK: launchec2.yml ********************************************************
1 plays in launchec2.yml
PLAY [Create ec2 instance] *****************************************************
TASK [launch ec2] **************************************************************
task path: /Users/usrxxx/Desktop/cloud-jumper/Ansible/launchec2.yml:27
Using module file /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/ansible/modules/core/cloud/amazon/ec2.py
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: usrxxx
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo ~/.ansible/tmp/ansible-tmp-1485527483.82-106272618422730 `" && echo ansible-tmp-1485527483.82-106272618422730="` echo ~/.ansible/tmp/ansible-tmp-1485527483.82-106272618422730 `" ) && sleep 0'
<localhost> PUT /var/folders/cx/_fdv7nkn6dz21798p_bn9dp9ln9sqc/T/tmpnk2rh5 TO /Users/usrxxx/.ansible/tmp/ansible-tmp-1485527483.82-106272618422730/ec2.py
<localhost> PUT /var/folders/cx/_fdv7nkn6dz21798p_bn9dp9ln9sqc/T/tmpEpwenH TO /Users/usrxxx/.ansible/tmp/ansible-tmp-1485527483.82-106272618422730/args
<localhost> EXEC /bin/sh -c 'chmod u+x /Users/usrxxx/.ansible/tmp/ansible-tmp-1485527483.82-106272618422730/ /Users/usrxxx/.ansible/tmp/ansible-tmp-1485527483.82-106272618422730/ec2.py /Users/usrxxx/.ansible/tmp/ansible-tmp-1485527483.82-106272618422730/args && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/bin/env python /Users/usrxxx/.ansible/tmp/ansible-tmp-1485527483.82-106272618422730/ec2.py /Users/usrxxx/.ansible/tmp/ansible-tmp-1485527483.82-106272618422730/args; rm -rf "/Users/usrxxx/.ansible/tmp/ansible-tmp-1485527483.82-106272618422730/" > /dev/null 2>&1 && sleep 0'
fatal: [localhost]: FAILED! => {
"changed": false,
"failed": true,
"invocation": {
"module_name": "ec2"
},
"module_stderr": "usage: ec2.py [-h] [--list] [--host HOST] [--refresh-cache]\n [--profile BOTO_PROFILE]\nec2.py: error: unrecognized arguments: /Users/usrxxx/.ansible/tmp/ansible-tmp-1485527483.82-106272618422730/args\n",
"module_stdout": "",
"msg": "MODULE FAILURE"
}
to retry, use: --limit #/Users/usrxxx/Desktop/cloud-jumper/Ansible/launchec2.retry
PLAY RECAP *********************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1
I noticed in the ec2.py file it says this:
NOTE: This script assumes Ansible is being executed where the environment
variables needed for Boto have already been set:
export AWS_ACCESS_KEY_ID='AK123'
export AWS_SECRET_ACCESS_KEY='abc123'
This script also assumes there is an ec2.ini file alongside it. To specify a
different path to ec2.ini, define the EC2_INI_PATH environment variable:
export EC2_INI_PATH=/path/to/my_ec2.ini
If you're using eucalyptus you need to set the above variables and
you need to define:
export EC2_URL=http://hostname_of_your_cc:port/services/Eucalyptus
If you're using boto profiles (requires boto>=2.24.0) you can choose a profile
using the --boto-profile command line argument (e.g. ec2.py --boto-profile prod) or using
the AWS_PROFILE variable:
AWS_PROFILE=prod ansible-playbook -i ec2.py myplaybook.yml
so I ran it like this:
AWS_PROFILE=profile_xxxx ansible-playbook -i hosts launchec2.yml -vvv
but still got the same results...
----EDIT-----
I also ran it like this:
export ANSIBLE_HOST_KEY_CHECKING=false
export AWS_ACCESS_KEY=<your aws access key here>
export AWS_SECRET_KEY=<your aws secret key here>
ansible-playbook -i hosts launchec2.yml
but still got this back...still seems to be a credentials issue?
usrxxx$ ansible-playbook -i hosts launchec2.yml
PLAY [Create ec2 instance] *****************************************************
TASK [launch ec2] **************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "module_stderr": "usage: ec2.py [-h] [--list] [--host HOST] [--refresh-cache]\n [--profile BOTO_PROFILE]\nec2.py: error: unrecognized arguments: /Users/usrxxx/.ansible/tmp/ansible-tmp-1485531356.01-33528208838066/args\n", "module_stdout": "", "msg": "MODULE FAILURE"}
to retry, use: --limit #/Users/usrxxx/Desktop/cloud-jumper/Ansible/launchec2.retry
PLAY RECAP *********************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1
---EDIT 2------
Completely removed Ansible and then installed it with Homebrew, but got the same error. So I went to the directory it loads ec2.py from (Using module file /usr/local/Cellar/ansible/2.2.1.0/libexec/lib/python2.7/site-packages/ansible/modules/core/cloud/amazon/ec2.py) and replaced that ec2.py with this one: https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/ec2.py. But now I get this error:
Using /Users/usrxxx/ansible/ansible.cfg as config file
PLAYBOOK: launchec2.yml ********************************************************
1 plays in launchec2.yml
PLAY [Create ec2 instance] *****************************************************
TASK [aws : launch ec2] ********************************************************
task path: /Users/usrxxx/Desktop/cloud-jumper/Ansible/roles/aws/tasks/main.yml:1
Using module file /usr/local/Cellar/ansible/2.2.1.0/libexec/lib/python2.7/site-packages/ansible/modules/core/cloud/amazon/ec2.py
fatal: [localhost]: FAILED! => {
"failed": true,
"msg": "module (ec2) is missing interpreter line"
}
It seems you have placed the ec2.py inventory script into your /path/to/playbook/library/ folder.
You should not put dynamic inventory scripts there: that way Ansible runs the inventory script instead of the ec2 module.
Remove ec2.py from your project's library folder (or from the global library path defined in ansible.cfg) and try again.
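A rough sketch of the fix (the library path here is an assumption; check the library setting in your ansible.cfg for the real one):
# move the dynamic inventory script out of the module search path
mv /path/to/playbook/library/ec2.py /path/to/playbook/inventory-ec2.py
# then run the playbook again; the ec2 *module* will now be found
AWS_PROFILE=profile_xxxx ansible-playbook -i hosts launchec2.yml -vvv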
I'm using Ansible to deploy my application.
I've come to the point where I want to upload my grunted assets to a newly created bucket. Here is what I have done:
{{hostvars.localhost.public_bucket}} is the bucket name, and
{{client}}/{{version_id}}/assets/admin is the path to a folder containing multi-level folders and assets to upload:
- s3:
    aws_access_key: "{{ lookup('env','AWS_ACCESS_KEY_ID') }}"
    aws_secret_key: "{{ lookup('env','AWS_SECRET_ACCESS_KEY') }}"
    bucket: "{{hostvars.localhost.public_bucket}}"
    object: "{{client}}/{{version_id}}/assets/admin"
    src: "{{trunk}}/public/assets/admin"
    mode: put
Here is the error message:
fatal: [x.y.z.t]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_name": "s3"}, "module_stderr": "", "module_stdout": "\r\nTraceback (most recent call last):\r\n File \"/home/ubuntu/.ansible/tmp/ansible-tmp-1468581761.67-193149771659393/s3\", line 2868, in <module>\r\n main()\r\n File \"/home/ubuntu/.ansible/tmp/ansible-tmp-1468581761.67-193149771659393/s3\", line 561, in main\r\n upload_s3file(module, s3, bucket, obj, src, expiry, metadata, encrypt, headers)\r\n File \"/home/ubuntu/.ansible/tmp/ansible-tmp-1468581761.67-193149771659393/s3\", line 307, in upload_s3file\r\n key.set_contents_from_filename(src, encrypt_key=encrypt, headers=headers)\r\n File \"/usr/local/lib/python2.7/dist-packages/boto/s3/key.py\", line 1358, in set_contents_from_filename\r\n with open(filename, 'rb') as fp:\r\nIOError: [Errno 21] Is a directory: '/home/abcd/efgh/public/assets/admin'\r\n", "msg": "MODULE FAILURE", "parsed": false}
I went through the documentation and didn't find a recursion option for the Ansible s3 module.
Is this a bug, or am I missing something?
As of Ansible 2.3, you can use s3_sync:
- name: basic upload
  s3_sync:
    bucket: tedder
    file_root: roles/s3/files/
Note: If you're using a non-default region, you should set region explicitly, otherwise you get a somewhat obscure error along the lines of: An error occurred (400) when calling the HeadObject operation: Bad Request
Here's a complete playbook matching what you were trying to do above:
- hosts: localhost
  vars:
    aws_access_key: "{{ lookup('env','AWS_ACCESS_KEY_ID') }}"
    aws_secret_key: "{{ lookup('env','AWS_SECRET_ACCESS_KEY') }}"
    bucket: "{{hostvars.localhost.public_bucket}}"
  tasks:
    - name: Upload files
      s3_sync:
        aws_access_key: '{{aws_access_key}}'
        aws_secret_key: '{{aws_secret_key}}'
        bucket: '{{bucket}}'
        file_root: "{{trunk}}/public/assets/admin"
        key_prefix: "{{client}}/{{version_id}}/assets/admin"
        permission: public-read
        region: eu-central-1
Notes:
You could probably remove region; I just added it to exemplify my point above.
I've just added the keys to be explicit. You can (and probably should) use environment variables for this:
From the docs:
If parameters are not set within the module, the following environment variables can be used in decreasing order of precedence AWS_URL or EC2_URL, AWS_ACCESS_KEY_ID or AWS_ACCESS_KEY or EC2_ACCESS_KEY, AWS_SECRET_ACCESS_KEY or AWS_SECRET_KEY or EC2_SECRET_KEY, AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN, AWS_REGION or EC2_REGION
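So a minimal variant of the task above (a sketch, assuming AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are exported in the environment where ansible-playbook runs) can drop the key parameters entirely:
- name: Upload files
  s3_sync:
    # credentials are picked up from the environment per the precedence above
    bucket: "{{ hostvars.localhost.public_bucket }}"
    file_root: "{{trunk}}/public/assets/admin"
    key_prefix: "{{client}}/{{version_id}}/assets/admin"
    permission: public-read
    region: eu-central-1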
The Ansible s3 module does not support directory uploads or any recursion.
For this task, I'd recommend the AWS CLI; check the syntax below.
command: "aws s3 cp {{client}}/{{version_id}}/assets/admin s3://{{hostvars.localhost.public_bucket}}/ --recursive"
Since you are using Ansible, it looks like you wanted something idempotent, but Ansible doesn't yet support S3 directory uploads or any recursion, so you should probably use the AWS CLI to do the job, like this:
command: "aws s3 cp {{client}}/{{version_id}}/assets/admin s3://{{hostvars.localhost.public_bucket}}/ --recursive"
I was able to accomplish this using the s3 module by iterating over a listing of the directory I wanted to upload. The little inline Python script I run via the command module just outputs the full list of file paths in the directory, formatted as JSON.
- name: upload things
  hosts: localhost
  connection: local
  tasks:
    - name: Get all the files in the directory I want to upload, formatted as a JSON list
      command: python -c 'import os, json; print json.dumps([os.path.join(dp, f)[2:] for dp, dn, fn in os.walk(os.path.expanduser(".")) for f in fn])'
      args:
        chdir: ../../styles/img
      register: static_files_cmd

    - s3:
        bucket: "{{ bucket_name }}"
        mode: put
        object: "{{ item }}"
        src: "../../styles/img/{{ item }}"
        permission: "public-read"
      with_items: "{{ static_files_cmd.stdout|from_json }}"
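If you'd rather avoid the inline Python, roughly the same listing can come from the find module plus the relpath filter; a sketch under the same assumptions (local connection, the ../../styles/img layout from above):
- name: List all files under the asset directory
  find:
    paths: "{{ playbook_dir }}/../../styles/img"
    recurse: yes
  register: static_files

- s3:
    bucket: "{{ bucket_name }}"
    mode: put
    # relpath turns each absolute result back into a bucket-key-style path
    object: "{{ item.path | relpath(playbook_dir + '/../../styles/img') }}"
    src: "{{ item.path }}"
    permission: "public-read"
  with_items: "{{ static_files.files }}"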