Using aws_secret in Ansible

I'm trying to retrieve a password from AWS Secrets Manager with Ansible 2.8 using the aws_secret lookup.
The following have not worked for me:
exporting the region in .bashrc
setting Ansible environment variables on the task
setting up Ansible variables in pre_tasks
- hosts: StagingApps
  remote_user: staging
  gather_facts: false
  tasks:
    - debug:
        var: "{{ lookup('aws_secret', 'staging_mongodb_pass', region='us-east-1') }}"
        msg: "{{ query('aws_secret', 'staging_mongodb_pass', region='us-east-1') }}"
      environment:
        region: 'us-east-1'
Error Message:
FAILED! => {"msg": "An unhandled exception occurred while running the lookup plugin 'aws_secret'. Error was a , original message: 'Requested entry (plugin_type: lookup plugin: aws_secret setting: region ) was not defined in configuration.'"}

The playbook below has worked for me:
- name: "register mongodb from secretsmanager"
shell: "aws secretsmanager get-secret-value --secret-id staging_mongodb"
register: mongodb_pass
delegate_to: 127.0.0.1
- set_fact:
mongodb_pass_dict: "{{ mongodb_pass.stdout | from_json | json_query('SecretString') }}"
- set_fact:
mongodb_pass_list: "{{ ['staging_mongodb'] | map('extract', mongodb_pass_dict) | list }}"
- set_fact:
mongodb_pass: "{{ mongodb_pass_list[0] }}"
- template:
src: application.properties.j2
dest: application.properties
mode: 0644
backup: yes
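For what it's worth, the same workaround can be collapsed into a single task with the pipe lookup, which also runs on the controller (a sketch, assuming the secret body is a JSON document with a staging_mongodb key, as above):

- set_fact:
    # run the AWS CLI on the controller, parse the outer JSON, then the SecretString JSON
    mongodb_pass: "{{ ((lookup('pipe', 'aws secretsmanager get-secret-value --secret-id staging_mongodb --region us-east-1') | from_json).SecretString | from_json)['staging_mongodb'] }}"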

It looks like Ansible released this lookup plugin in a broken state. They have an issue and a PR open to fix it:
https://github.com/ansible/ansible/issues/54790
https://github.com/ansible/ansible/pull/54792
Very disappointing, as I've been waiting for this plugin for many months.
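Once that fix lands (and on later releases, where the plugin ships in the amazon.aws collection), the region argument should work as documented. A sketch, not verified on every version:

- debug:
    msg: "{{ lookup('amazon.aws.aws_secret', 'staging_mongodb_pass', region='us-east-1') }}"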

Related

Use of module "sts_assume_role" under community.aws or amazon.aws collection

I'm getting the error below while running a playbook that uses the sts_assume_role module from the community.aws / amazon.aws collection, which is already installed on the Ansible server.
# Note: These examples do not set authentication details, see the AWS Guide for details.
- name: testing assume role
  user: ec2-user
  hosts: localhost
  gather_facts: true
  collections:
    - community.aws.sts_assume_role
  tasks:
    - Name: assume role
      local_action:
        sts_assume_role:
          role_arn: "arn:aws:iam::123456789:role/test_iam_role"
          role_session_name: "MySession"
      register: assumed_role
      run_once: True

    # Use the assumed role above to tag an instance in account 123456789
    - name: sts token
      local_action:
        sts_assume_role:
          aws_access_key: "{{ assumed_role.sts_creds.access_key }}"
          aws_secret_key: "{{ assumed_role.sts_creds.secret_key }}"
          security_token: "{{ assumed_role.sts_creds.session_token }}"
      run_once: True

    - Name: check current assume role
      shell: aws sts get-caller-identity
Please see the attached picture for the error.
Additional info:
Ansible version: 2.9.10
Boto3 version: 1.16.0
botocore version: 1.19.63
python version: 3.6.8
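Since the error screenshot is not included, the exact failure can't be confirmed, but the playbook as posted will not even parse: Name: is capitalized in two tasks, collections: should list a collection rather than a module, and there is leftover editor text after tasks:. A cleaned-up sketch (credentials are assumed to come from the environment or an instance profile):

- name: testing assume role
  hosts: localhost
  gather_facts: true
  collections:
    - community.aws
  tasks:
    - name: assume role
      sts_assume_role:
        role_arn: "arn:aws:iam::123456789:role/test_iam_role"
        role_session_name: "MySession"
      register: assumed_role
      run_once: true

    # use the temporary credentials returned by the first task
    - name: check current assumed identity
      shell: aws sts get-caller-identity
      environment:
        AWS_ACCESS_KEY_ID: "{{ assumed_role.sts_creds.access_key }}"
        AWS_SECRET_ACCESS_KEY: "{{ assumed_role.sts_creds.secret_key }}"
        AWS_SESSION_TOKEN: "{{ assumed_role.sts_creds.session_token }}"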

Create and set up GCP VMs with Ansible, ssh Permission denied (publickey)

Before executing the playbook I created a service account and gave it the "Compute admin", "OS Login admin" and "service account user" permissions. Then I downloaded the JSON key to my machine. The service account state is "active".
On my machine I wrote a playbook to set up one GCP VM, install Apache, and copy a dummy web page to it.
- name: Create Compute Engine instances
  hosts: localhost
  gather_facts: no
  vars:
    gcp_project: ansible-xxxxxx
    gcp_cred_kind: serviceaccount
    gcp_cred_file: ~/ansible-key.json
    zone: "us-central1-a"
    region: "us-central1"
    machine_type: "n1-standard-1"
    image: "projects/ubuntu-os-cloud/global/images/family/ubuntu-1604-lts"
  tasks:
    - name: Create an IP address for instance
      gcp_compute_address:
        name: "{{ zone }}-ip"
        region: "{{ region }}"
        project: "{{ gcp_project }}"
        service_account_file: "{{ gcp_cred_file }}"
        auth_kind: "{{ gcp_cred_kind }}"
      register: gce_ip

    - name: Bring up the instance in the zone.
      gcp_compute_instance:
        name: "{{ zone }}"
        machine_type: "{{ machine_type }}"
        disks:
          - auto_delete: true
            boot: true
            initialize_params:
              source_image: "{{ image }}"
        network_interfaces:
          - access_configs:
              - name: External NAT
                nat_ip: "{{ gce_ip }}"
                type: ONE_TO_ONE_NAT
        tags:
          items:
            - http-server
            - https-server
        zone: "{{ zone }}"
        project: "{{ gcp_project }}"
        service_account_file: "{{ gcp_cred_file }}"
        auth_kind: "{{ gcp_cred_kind }}"
      register: gce
...after instantiating the VM I connect to it via SSH...
  post_tasks:
    - name: Wait for SSH for instance
      wait_for: delay=5 sleep=5 host={{ gce_ip.address }} port=22 state=started timeout=100
    - name: Save host data for first zone
      add_host: hostname={{ gce_ip.address }} groupname=gce_instances_ips
The ansible-playbook run never gets past this step.
To call it I use ansible-playbook main.yaml --user sa_123456789, and
the given error is either a
fatal: [130.211.225.130]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: sa_104318085248975873144#130.211.225.130: Permission denied (publickey).", "unreachable": true}
or a simple timeout
fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 105, "msg": "Timeout when waiting for 130.211.225.130:22"}
In the metadata of GCE I also set enable-oslogin to TRUE.
The VM is created without any problem and is accessible through the GCP console (GUI). If I try to access it via SSH with privately generated keys, the machine seems to be unreachable.
Does anyone have experience with this type of error?
This error usually occurs when there is no valid public/private key pair generated and set up.
Try any of the following approaches:
Create/edit the ansible.cfg file in your playbook directory and add a line with the full path of your key (the setting is named private_key_file):
[defaults]
private_key_file = /Users/username/.ssh/private_key
This sets the private key globally for all hosts in your playbook.
Add the private key to your playbook using the following lines:
vars:
  ansible_ssh_private_key_file: "/home/ansible/.ssh/id_rsa"
You can also pass the private key to use directly on the command line:
ansible-playbook -vvvv --private-key=/Users/you/.ssh/your_key playbookname.yml
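With OS Login enabled, as in the question, the private key alone is not enough: the matching public key also has to be registered for the account you connect with. A sketch of doing that as a playbook task (the key path and the gcloud invocation are assumptions about your setup):

- name: register the public key for OS Login (runs on the controller)
  command: gcloud compute os-login ssh-keys add --key-file ~/.ssh/id_rsa.pub
  delegate_to: localhost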

ansible ec2_instance_facts filter by "tag:Name" does not filter by instance Name

I want to run ec2_instance_facts to find an instance by name. However, I must be doing something wrong because I cannot get the filter to actually work. The following returns everything in my configured AWS_REGION:
- ec2_instance_facts:
    filters:
      "tag:Name": "{{ myname }}"
  register: ec2_metadata

- debug: msg="{{ ec2_metadata.instances }}"
The answer is to use the ec2_remote_facts module, not the ec2_instance_facts module.
- ec2_remote_facts:
    filters:
      "tag:Name": "{{ myname }}"
  register: ec2_metadata

- debug: msg="{{ ec2_metadata.instances }}"
Based on the documentation, ec2_remote_facts is marked as DEPRECATED from Ansible version 2.8 in favor of ec2_instance_facts.
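For anyone on a newer release: the _facts modules were renamed, so ec2_instance_facts became ec2_instance_info in Ansible 2.9 and now lives in the amazon.aws collection. The equivalent call, as a sketch (the instance-state-name filter is optional but keeps terminated instances out of the result):

- amazon.aws.ec2_instance_info:
    filters:
      "tag:Name": "{{ myname }}"
      instance-state-name: running
  register: ec2_metadata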
This is working well for me:

- name: Get instances list
  ec2_instance_facts:
    region: "{{ region }}"
    filters:
      "tag:Name": "{{ myname }}"
  register: ec2_list

- debug: msg="{{ ec2_list.instances }}"
Maybe the filter is not being applied? Can you go through the results in the object?
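If you want to sanity-check what the filter actually matched, a quick way is to print only the instance ids (a sketch against the ec2_list register from the answer above):

- debug:
    msg: "{{ ec2_list.instances | map(attribute='instance_id') | list }}"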

Ansible querying AWS AMIs

I'm trying to query AWS EC2 AMIs from Ansible but keep running into an error when looping through the results:
- hosts: localhost
  tasks:
    - name: Get AMI
      ec2_ami_facts:
        owner: amazon
        filters:
          architecture: x86_64
          root-device-type: ebs
      register: amis

    - name: return filtered data
      debug:
        msg: "{{ item }}"
      loop: " {{ amis \
        | json_query( 'Images[?Description!=`null`] \
        | [?starts_with(Description,`Amazon Linux`)]' ) \
        }} "
The idea is to return the image documents, and later just the image IDs with more filtering (the end goal is to get the most recent AMI id for a given description). But with the current example, and anything else I try, I get this error:
TASK [return filtered data] ****************************************************
fatal: [localhost]: FAILED! => {"msg": "Invalid data passed to 'loop',
it requires a list, got this instead: . Hint: If you passed a
list/dict of just one element, try adding wantlist=True to your lookup
invocation or use q/query instead of lookup."}
I can look at 'amis' in its entirety and it looks good, but any filtering I try fails. What is the correct method?
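One likely culprit: the spaces inside the quotes in loop: " {{ ... }} " force the template result to be rendered as a plain string, which is exactly what the "requires a list" error complains about. Note also that ec2_ami_facts returns lowercase keys (images, description) rather than Images/Description. A minimal corrected loop, as a sketch:

- name: return filtered data
  debug:
    msg: "{{ item.image_id }}"
  loop: "{{ amis.images | selectattr('description', 'defined') | list }}"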
This works, thanks to the folks at #ansible on freenode.
- hosts: localhost
  tasks:
    - name: Get AMI
      ec2_ami_facts:
        owner: amazon
        filters:
          architecture: x86_64
          root-device-type: ebs
      register: amis

    - name: return latest AMI
      set_fact:
        my_ami: "{{ amis.images
          | selectattr('description', 'defined')
          | selectattr('description', 'match', '^Amazon Linux.*GP2$')
          | selectattr('description', 'match', '[^(Candidate)]')
          | sort(attribute='creation_date')
          | last }}"

    - debug:
        msg: "ami = {{ my_ami | to_nice_yaml }}"
Also see here: https://bitbucket.org/its-application-delivery/ansible-aws/src/master/ansible/task_find_ami.yml?fileviewer=file-view-default
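For Amazon's own images there is also a shortcut that skips the sorting entirely: AWS publishes the latest AMI ids as public SSM parameters, which the aws_ssm lookup can read (a sketch; the parameter name shown is the public one for Amazon Linux 2):

- set_fact:
    my_ami_id: "{{ lookup('aws_ssm', '/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2', region='us-east-1') }}"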
Use the following to dynamically fetch the latest AMI:
---
- name: Find latest AMI
  ec2_ami_facts:
    owners: "099720109477"
    region: "{{ AWS_REGION }}"
    filters:
      name: "ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-*"
  register: findami

- name: Sort the latest AMI
  set_fact:
    latest_ami: >
      {{ findami.images | sort(attribute='creation_date') | last }}

- name: Launch Instance with latest AMI
  ec2:
    instance_type: "{{ INSTANCE_TYPE }}"
    image: "{{ latest_ami.image_id }}"
    key_name: "{{ KEY_NAME }}"
    region: "{{ AWS_REGION }}"
    group_id: "{{ sg.group_id }}"
    wait: yes
    count: "{{ INSTANCES_COUNT }}"
    vpc_subnet_id: "{{ subnet.subnet.id }}"
    assign_public_ip: no

Creating n new instances in AWS EC2 VPC and then configuring them

I'm having a really hard time doing what seems like a fairly standard task, so I'm hoping somebody can help me. I've googled this like crazy and most of the examples are not in a VPC or use deprecated structures that make them wrong or unusable in my use case.
Here are my goals:
I want to launch a whole mess of new instances in my VPC (the code below has 3, but it could be a hundred)
I want to wait for those instances to come alive
I then want to configure those instances (SSH into them, change the hostname, enable some services, etc.)
Now I could probably do this in 2 tasks. I could create the instances in one playbook, wait for them to settle down, then run a second playbook to configure them. That's probably what I'm going to do now because I want to get moving, but there has to be a one-shot answer to this.
Here's what I have so far for a playbook:
---
- hosts: localhost
  connection: local
  gather_facts: False
  tasks:
    - name: Provision Lunch
      with_items:
        - hostname: eggroll1
        - hostname: eggroll2
        - hostname: eggroll3
      ec2:
        region: us-east-1
        key_name: eggfooyong
        vpc_subnet_id: subnet-8675309
        instance_type: t2.micro
        image: ami-8675309
        wait: true
        group_id: sg-8675309
        exact_count: 1
        count_tag:
          Name: "{{ item.hostname }}"
        instance_tags:
          Name: "{{ item.hostname }}"
          role: "supper"
          ansibleowned: "True"
      register: ec2

    - name: Wait for SSH to come up
      wait_for: host={{ item.private_ip }} port=22 delay=60 timeout=900 state=started
      with_items: '{{ec2.instances}}'

    - name: Update hostname on instances
      hostname: name={{ item.private_ip }}
      with_items: '{{ec2.instances}}'
And that doesn't work. What I get is:
TASK [Wait for SSH to come up] *************************************************
[DEPRECATION WARNING]: Skipping task due to undefined Error, in the future this will be a fatal error.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting
deprecation_warnings=False in ansible.cfg.
TASK [Update hostname on instances] ********************************************
[DEPRECATION WARNING]: Skipping task due to undefined Error, in the future this will be a fatal error.. This feature will be removed in a future release. Deprecation warnings can be disabled by setting
deprecation_warnings=False in ansible.cfg.
Which makes me sad. Now this is my latest incarnation of that playbook, but I've tried to rewrite it using every example I can find on the internet. Most of them have with_items written in a different way, but Ansible tells me that way is deprecated, and then fails.
So far ansible has been fun and easy, but this is making me want to toss my laptop across the street.
Any suggestions? Should I be using register and with_items at all? Would I be better off using something like this:
add_host: hostname={{item.public_ip}} groupname=deploy
instead? I'm wide open to a rewrite here. I'm going to go write this up in 2 playbooks and would love to get suggestions.
Thanks!
****EDIT****
Now it's just starting to feel broken or seriously changed. I've googled dozens of examples and they all are written the same way and they all fail with the same error. This is my simple playbook now:
---
- hosts: localhost
  connection: local
  gather_facts: False
  vars:
    builderstart: 93
    builderend: 94
  tasks:
    - name: Provision Lunch
      ec2:
        region: us-east-1
        key_name: dakey
        vpc_subnet_id: subnet-8675309
        instance_type: t2.micro
        image: ami-8675309
        wait: True
        group_id: sg-OU812
        exact_count: 1
        count_tag:
          Name: "{{ item }}"
        instance_tags:
          Name: "{{ item }}"
          role: "dostuff"
          extracheese: "True"
      register: ec2
      with_sequence: start="{{builderstart}}" end="{{builderend}}" format=builder%03d

    - name: the newies
      debug: msg="{{ item }}"
      with_items: "{{ ec2.instances }}"
It really couldn't be more straightforward. No matter how I write it, no matter how I vary it, I get the same basic error:
[DEPRECATION WARNING]: Skipping task due to undefined Error, in the
future this will be a fatal error.: 'dict object' has no attribute
'instances'.
So it looks like it's the with_items: "{{ ec2.instances }}" line that's causing the error.
I've used debug to print out ec2, and that error looks accurate. It looks to me like the structure changed: ec2 now contains a dictionary with results as a key to another dictionary object, and instances is a key in that dictionary. But I can't find a sane way to access the data.
For what it's worth, I've tried accessing this in 2.0.1, 2.0.2, and 2.2 and I get the same problem in every case.
Are the rest of you using 1.9 or something? I can't find an example anywhere that works. It's very frustrating.
Thanks again for any help.
Don't do it like this:
- name: Provision Lunch
  with_items:
    - hostname: eggroll1
    - hostname: eggroll2
    - hostname: eggroll3
  ec2:
    region: us-east-1
Because by using it this way, you are flushing all the info from ec2 into your item. You receive the following output:
TASK [Launch instance] *********************************************************
changed: [localhost] => (item={u'hostname': u'eggroll1'})
changed: [localhost] => (item={u'hostname': u'eggroll2'})
but item should be like this:
changed: [localhost] => (item={u'kernel': None, u'root_device_type': u'ebs', u'private_dns_name': u'ip-172-31-29-85.ec2.internal', u'public_ip': u'54.208.138.217', u'private_ip': u'172.31.29.85', u'id': u'i-003b63636e7ffc27c', u'ebs_optimized': False, u'state': u'running', u'virtualization_type': u'hvm', u'architecture': u'x86_64', u'ramdisk': None, u'block_device_mapping': {u'/dev/sda1': {u'status': u'attached', u'delete_on_termination': True, u'volume_id': u'vol-37581295'}}, u'key_name': u'eggfooyong', u'image_id': u'ami-fce3c696', u'tenancy': u'default', u'groups': {u'sg-aabbcc34': u'ssh'}, u'public_dns_name': u'ec2-54-208-138-217.compute-1.amazonaws.com', u'state_code': 16, u'tags': {u'ansibleowned': u'True', u'role': u'supper'}, u'placement': u'us-east-1d', u'ami_launch_index': u'1', u'dns_name': u'ec2-54-208-138-217.compute-1.amazonaws.com', u'region': u'us-east-1', u'launch_time': u'2016-04-19T08:19:16.000Z', u'instance_type': u't2.micro', u'root_device_name': u'/dev/sda1', u'hypervisor': u'xen'})
Try the following code instead:
- name: Create a sandbox instance
  hosts: localhost
  gather_facts: False
  vars:
    keypair: eggfooyong
    instance_type: t2.micro
    security_group: ssh
    image: ami-8675309
    region: us-east-1
    subnet: subnet-8675309
    instance_names:
      - eggroll1
      - eggroll2
  tasks:
    - name: Launch instance
      ec2:
        key_name: "{{ keypair }}"
        group: "{{ security_group }}"
        instance_type: "{{ instance_type }}"
        image: "{{ image }}"
        wait: true
        region: "{{ region }}"
        vpc_subnet_id: "{{ subnet }}"
        assign_public_ip: no
        count: "{{ instance_names | length }}"
      register: ec2

    - name: tag instances
      ec2_tag:
        resource: '{{ item.0.id }}'
        region: '{{ region }}'
        tags:
          Name: '{{ item.1 }}'
          role: "supper"
          ansibleowned: "True"
      with_together:
        - '{{ ec2.instances }}'
        - '{{ instance_names }}'

    - name: Wait for SSH to come up
      wait_for: host={{ item.private_ip }} port=22 delay=60 timeout=320 state=started
      with_items: '{{ ec2.instances }}'
This assumes that your Ansible host is located inside the VPC.
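To then configure the new instances in the same run, the add_host approach from the question works directly on top of this, since ec2.instances here carries the real instance data (a sketch, reusing the deploy group name the question suggested):

- name: add the new instances to an in-memory group for later plays
  add_host:
    hostname: "{{ item.private_ip }}"
    groupname: deploy
  with_items: '{{ ec2.instances }}'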
To achieve this goal, I have written a really small filter plugin get_ec2_info.
Create a directory named filter_plugins.
Create a plugin file get_ec2_info.py with the following content:
class FilterModule(object):
    def filters(self):
        return {
            'get_ec2_info': get_ec2_info,
        }


def get_ec2_info(results, ec2_key):
    # walk every loop result and pull the requested key out of each instance
    ec2_info = []
    for item in results:
        for ec2 in item['instances']:
            ec2_info.append(ec2[ec2_key])
    return ec2_info
Then you can use this in your playbook:
---
- hosts: localhost
  connection: local
  gather_facts: False
  tasks:
    - name: Provision Lunch
      ec2:
        region: us-east-1
        key_name: eggfooyong
        vpc_subnet_id: subnet-8675309
        instance_type: t2.micro
        image: ami-8675309
        wait: true
        group_id: sg-8675309
        exact_count: 1
        count_tag:
          Name: "{{ item.hostname }}"
        instance_tags:
          Name: "{{ item.hostname }}"
          role: "supper"
          ansibleowned: "True"
      register: ec2
      with_items:
        - hostname: eggroll1
        - hostname: eggroll2
        - hostname: eggroll3

    - name: Create SSH Group to login dynamically to EC2 Instance(s)
      add_host:
        hostname: "{{ item }}"
        groupname: my_ec2_servers
      with_items: "{{ ec2.results | get_ec2_info('public_ip') }}"

    - name: Wait for SSH to come up on EC2 Instance(s)
      wait_for:
        host: "{{ item }}"
        port: 22
        state: started
      with_items: "{{ ec2.results | get_ec2_info('public_ip') }}"

# CALL THE DYNAMIC GROUP IN THE SAME PLAYBOOK
- hosts: my_ec2_servers
  become: yes
  remote_user: ubuntu
  gather_facts: yes
  tasks:
    - name: DO YOUR TASKS HERE
EXTRA INFORMATION:
Using Ansible 2.0.1.0.
Assuming you are spinning up Ubuntu instances; if not, change the value of remote_user: ubuntu.
Assuming the SSH key is properly configured.
Please consult these github repos for more help:
ansible-aws-role-1
ansible-aws-role-2
I think this would be helpful for debugging:
https://www.middlewareinventory.com/blog/ansible-dict-object-has-no-attribute-stdout-or-stderr-how-to-resolve/
The registered ec2 variable is a dict, and it has a results key. The results key holds many elements, including dicts and lists, like below:
{
    "msg": {
        "results": [
            {
                "invocation": {
                },
                "instances": [],
                "changed": false,
                "tagged_instances": [
                    {
                    }
                ],
                "instance_ids": null,
                "failed": false,
                "item": [
                ],
                "ansible_loop_var": "item"
            }
        ],
        "msg": "All items completed",
        "changed": false
    },
    "_ansible_verbose_always": true,
    "_ansible_no_log": false,
    "changed": false
}
So you can get the desired data using dotted access, for instance item.changed, which holds the boolean value false.
- debug:
    msg: "{{ item.changed }}"
  loop: "{{ ec2.results }}"