On my local machine I set the following environment vars:
export AWS_ACCESS_KEY='xxxx'
export AWS_SECRET_KEY='xxxx'
export AWS_REGION='us-east-1'
Then in a playbook I put this:
...
tasks:
  - name: Get some secrets
    vars:
      db_password: "{{ lookup('amazon.aws.aws_secret', 'DB_PASSWORD') }}"
    debug:
      msg: "{{ db_password }}"
...
When running the playbook, the connection to AWS Secrets Manager works just fine: the necessary AWS variables are taken from the environment and I get the proper value in db_password.
To do the same in AWX, I set the above three variables under Settings > Job Settings > Extra Environment Variables:
{
  "AWS_ACCESS_KEY": "xxx",
  "AWS_SECRET_KEY": "xxx",
  "AWS_REGION": "us-east-1"
}
Now, when I run a playbook from AWX containing the same lookup, "{{ lookup('amazon.aws.aws_secret', 'DB_PASSWORD') }}", I get an error saying that I need to specify a region. If I set the region manually, as in "{{ lookup('amazon.aws.aws_secret', 'DB_PASSWORD', region='us-east-1') }}", I get an error saying that AWX can't find the credentials.
So, for some reason, these three variables are not being read from the extra environment variables.
To make it work I had to write the following code in the playbook:
region: "{{ lookup('env', 'AWS_REGION') }}"
aws_access_key: "{{ lookup('env', 'AWS_ACCESS_KEY') }}"
aws_secret_key: "{{ lookup('env', 'AWS_SECRET_KEY') }}"
db_password: "{{ lookup('amazon.aws.aws_secret', 'DB_PASSWORD', aws_access_key=aws_access_key, aws_secret_key=aws_secret_key, region=region) }}"
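Put together, the full workaround task looks roughly like this (just a sketch combining the fragments above; the task name and the debug output are only for illustration):
- name: Get some secrets
  vars:
    region: "{{ lookup('env', 'AWS_REGION') }}"
    aws_access_key: "{{ lookup('env', 'AWS_ACCESS_KEY') }}"
    aws_secret_key: "{{ lookup('env', 'AWS_SECRET_KEY') }}"
    db_password: "{{ lookup('amazon.aws.aws_secret', 'DB_PASSWORD', aws_access_key=aws_access_key, aws_secret_key=aws_secret_key, region=region) }}"
  debug:
    msg: "{{ db_password }}"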
But I don't like this solution; I would prefer to avoid explicitly setting those three vars in the lookup and instead somehow tell AWX to take the three values from the extra environment variables. Is there any way to achieve this?
For cost reasons, our ASGs in the QA environment run with desired/min/max capacity set to "1". That's not the case for Production, but since we use the same code for QA and Prod deployments (minus a few variables, of course) this is causing problems with the QA automation jobs.
- name: create autoscale groups original_lc
  ec2_asg:
    name: "{{ app_name }}"
    target_group_arns: "{{ alb_target_group_facts.target_groups[0].target_group_arn }}"
    launch_config_name: "{{ launch_config.name }}"
    min_size: 1
    max_size: 1
    desired_capacity: 1
    region: "{{ region }}"
    vpc_zone_identifier: "{{ subnets | join(',') }}"
    health_check_type: "{{ health_check }}"
    replace_all_instances: yes
    wait_for_instances: false
    replace_batch_size: '{{ rollover_size }}'
    lc_check: false
    default_cooldown: "{{ default_cooldown }}"
    health_check_period: "{{ health_check_period }}"
    notification_topic: "{{ redeem_notification_group }}"
    tags:
      - Name: "{{ app_name }}"
      - Application: "{{ tag_Application }}"
      - Product_Owner: "{{ tag_Product_Owner }}"
      - Resource_Owner: "{{ tag_Resource_Owner }}"
      - Role: "{{ tag_Role }}"
      - Service_Category: "{{ tag_Service_Category }}"
  register: asg_original_lc
On the first run, the "ec2_asg" module creates the group properly, with the correct desired/min/max settings.
But when we run the job a second time to update the same ASG, it changes desired/min/max to "2" in AWS. We don't want that. We just want it to rotate out that one instance in the group. Is there a way to achieve that?
I'm trying to retrieve a password from AWS Secrets Manager using the Ansible 2.8 aws_secret lookup.
The following are not working for me:
In .bashrc, I have exported the region
Ansible environment variables in the task
Setting up Ansible variables in pre_tasks
- hosts: StagingApps
  remote_user: staging
  gather_facts: false
  tasks:
    - debug:
        var: "{{ lookup('aws_secret', 'staging_mongodb_pass', region='us-east-1') }}"
        msg: "{{ query('aws_secret', 'staging_mongodb_pass', region='us-east-1') }}"
      environment:
        region: 'us-east-1'
Error Message:
FAILED! => {"msg": "An unhandled exception occurred while running the lookup plugin 'aws_secret'. Error was a , original message: 'Requested entry (plugin_type: lookup plugin: aws_secret setting: region ) was not defined in configuration.'"}
The playbook below has worked for me:
- name: "register mongodb from secretsmanager"
shell: "aws secretsmanager get-secret-value --secret-id staging_mongodb"
register: mongodb_pass
delegate_to: 127.0.0.1
- set_fact:
mongodb_pass_dict: "{{ mongodb_pass.stdout | from_json | json_query('SecretString') }}"
- set_fact:
mongodb_pass_list: "{{ ['staging_mongodb'] | map('extract', mongodb_pass_dict) | list }}"
- set_fact:
mongodb_pass: "{{ mongodb_pass_list[0] }}"
- template:
src: application.properties.j2
dest: application.properties
mode: 0644
backup: yes
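A slightly shorter variant of the same idea lets the AWS CLI extract SecretString, so only one parse is needed on the Ansible side (a sketch, assuming the secret is a JSON map with a staging_mongodb key, as above):
- name: "read mongodb password from secretsmanager"
  command: "aws secretsmanager get-secret-value --secret-id staging_mongodb --query SecretString --output text"
  register: secret_raw
  delegate_to: 127.0.0.1
  changed_when: false

- set_fact:
    # SecretString is itself JSON, so parse it and pick out the key we need
    mongodb_pass: "{{ (secret_raw.stdout | from_json)['staging_mongodb'] }}"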
It looks like Ansible released this lookup plugin in a broken state. They have an issue and a PR open to fix it:
https://github.com/ansible/ansible/issues/54790
https://github.com/ansible/ansible/pull/54792
Very disappointing, as I've been waiting for this plugin for many months.
I want to run ec2_instance_facts to find an instance by name. However, I must be doing something wrong, because I cannot get the filter to actually work. The following returns everything in the AWS_REGION I have set:
- ec2_instance_facts:
    filters:
      "tag:Name": "{{ myname }}"
  register: ec2_metadata

- debug: msg="{{ ec2_metadata.instances }}"
The answer is to use the ec2_remote_facts module, not the ec2_instance_facts module.
- ec2_remote_facts:
    filters:
      "tag:Name": "{{ myname }}"
  register: ec2_metadata

- debug: msg="{{ ec2_metadata.instances }}"
Based on the documentation, ec2_remote_facts is marked as DEPRECATED as of Ansible 2.8 in favor of ec2_instance_facts.
This is working well for me:
- name: Get instances list
  ec2_instance_facts:
    region: "{{ region }}"
    filters:
      "tag:Name": "{{ myname }}"
  register: ec2_list

- debug: msg="{{ ec2_list.instances }}"
Maybe the filter is not being applied? Can you go through the results in the registered object?
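For example, something like this (using the ec2_list register from the task above) dumps just the instance IDs, which makes it easy to see whether the filter matched anything:
- name: Show which instances the filter matched
  debug:
    msg: "{{ ec2_list.instances | map(attribute='instance_id') | list }}"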
I am working on an autoscaling Ansible project. Can somebody tell me how I can delete old launch configurations using Ansible playbooks?
Thanks
Some time ago, I created a pull request for a new Ansible module, ec2_lc_find, that you can use to find old launch configurations and delete them.
For example, if you'd like to keep the 10 most recent launch configurations and remove the older ones, you could have tasks like:
---
- name: "Find old Launch Configs"
  ec2_lc_find:
    profile: "{{ boto_profile }}"
    region: "{{ aws_region }}"
    name_regex: "*nameToFind*"
    sort: true
    sort_end: -10
  register: old_launch_config

- name: "Remove old Launch Configs"
  ec2_lc:
    profile: "{{ boto_profile }}"
    region: "{{ aws_region }}"
    name: "{{ item.name }}"
    state: absent
  with_items: "{{ old_launch_config.results }}"
  ignore_errors: yes
I am trying to use Ansible to create an EC2 instance, configure a web server and then register it to a load balancer. I have no problem creating the EC2 instance, nor configuring the web server but all attempts to register it against an existing load balancer fail with varying errors depending on the code I use.
Has anyone had success in doing this?
Here are the links to the Ansible documentation for the ec2 and ec2_elb modules:
http://docs.ansible.com/ec2_module.html
http://docs.ansible.com/ec2_elb_module.html
Alternatively, if it is not possible to register the EC2 instance against the ELB post creation, I would settle for another 'play' that collects all EC2 instances with a certain name and loops through them, adding them to the ELB.
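Roughly what I have in mind for that fallback is something like the sketch below (untested; elb_name, myname and region are placeholder variables, and the instance id field may be id or instance_id depending on the module version):
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Find instances by Name tag
      ec2_remote_facts:
        filters:
          "tag:Name": "{{ myname }}"
      register: found

    - name: Register each matching instance with the ELB
      ec2_elb:
        ec2_elbs: "{{ elb_name }}"
        instance_id: "{{ item.id }}"
        state: present
        region: "{{ region }}"
      with_items: "{{ found.instances }}"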
Here's what I do that works:
- name: Add machine to elb
  local_action:
    module: ec2_elb
    aws_access_key: "{{ lookup('env', 'AWS_ACCESS_KEY') }}"
    aws_secret_key: "{{ lookup('env', 'AWS_SECRET_KEY') }}"
    region: "{{ ansible_ec2_placement_region }}"
    instance_id: "{{ ansible_ec2_instance_id }}"
    ec2_elbs: "{{ elb_name }}"
    state: present
The biggest issue was the access and secret keys. The ec2_elb module doesn't seem to use the environment variables or read ~/.boto, so I had to pass them manually.
The ansible_ec2_* variables are available if you use the ec2_facts module; you can of course fill in these parameters yourself instead.
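For example, gathering the facts just before the ELB task (a sketch; the webservers host group is a placeholder, and ec2_facts must run on the EC2 instance itself so it can reach the metadata service):
- hosts: webservers
  tasks:
    # Populates ansible_ec2_instance_id, ansible_ec2_placement_region, etc.
    - name: Gather EC2 metadata facts
      ec2_facts:

    # ...followed by the "Add machine to elb" task shown above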
The playbook below should work for creating the EC2 server and registering it with the ELB. Make sure you have the variables set properly, or hard-code the variable values in the playbook.
- name: Creating webserver
  local_action:
    module: ec2
    region: "{{ region }}"
    key_name: "{{ key }}"
    instance_type: t1.micro
    image: "{{ ami_id }}"
    wait: yes
    assign_public_ip: yes
    group_id: ["{{ sg_webserver }}"]
    vpc_subnet_id: "{{ PublicSubnet }}"
    instance_tags: '{"Name": "webserver", "Environment": "Dev"}'
  register: webserver

- name: Adding Webserver to ELB
  local_action:
    module: ec2_elb
    ec2_elbs: "{{ elb_name }}"
    instance_id: "{{ item.id }}"
    state: 'present'
    region: "{{ region }}"
  with_items: "{{ webserver.instances }}"