Ansible playbook - replace text in a file

I have recently started learning Ansible and now I have one question.
here is my playbook:
---
- name: execute command with sudo
  hosts: all
  user: root
  become: yes
  tasks:
    - name: executing
      command: sed -i "s/print_thresholds($name,undef,undef,92,98);/#print_thresholds($name,undef,undef,92,98);/g" /etc/munin/plugins/df
It worked fine, sed replaced everything I needed, but I got this warning:
[WARNING]: Consider using the replace, lineinfile or template module
rather than running sed. If you need to use command because replace,
lineinfile or template is insufficient you can add warn=False to this
command task or set command_warnings=False in ansible.cfg to get rid
of this message.
So, can you please tell me the correct playbook?

You should first of all read about the lineinfile module, as the warning suggests.
Then, although your code and the use of sed are correct, it is not the safest way to use Ansible to edit a file, mainly because it is neither idempotent nor as error-resistant as using an Ansible module.
This code produces the same (in some cases better) result as the sed command, but uses the lineinfile module:
- name: executing
  lineinfile:
    path: /etc/munin/plugins/df
    regexp: '^print_thresholds\(\$name,undef,undef,92,98\);'
    line: '#print_thresholds($name,undef,undef,92,98);'

Note that regexp must match the original line (with the regex metacharacters escaped), while line holds the literal replacement; the ^ anchor belongs in the regexp, not in line.
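Also be aware that lineinfile manages a single line, while your sed command (with the /g flag) rewrote every match in the file. If several lines can match, the replace module mirrors sed's semantics more closely. A minimal sketch, assuming the same file and pattern as above:

- name: comment out all matching threshold lines
  replace:
    path: /etc/munin/plugins/df
    regexp: '^print_thresholds\(\$name,undef,undef,92,98\);'
    replace: '#print_thresholds($name,undef,undef,92,98);'

Because the regexp is anchored at the start of the line, already-commented lines no longer match on later runs, so the task stays idempotent.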


Execute a BigQuery query in Cloud Build step

I'm using Cloud Build with the gcloud builder. I override the entrypoint to be bq so I can run some BigQuery SQL in my build step. Previously, I had the SQL embedded directly in the YAML config for Cloud Build. This works fine:
steps:
- name: gcr.io/cloud-builders/gcloud
  entrypoint: 'bq'
  args: ['query', '--use_legacy_sql=false', 'SELECT 1']
Now I'd like to refactor the SQL out of the YAML and into a file instead. According to here, you can cat the file or pipe it to bq. This works on the command line without any problems.
But I can't get it to work with Cloud Build. I've tried lots of different combinations and escaping chars, etc., but no matter what I try, the shell doesn't evaluate the backticks in `cat my_query.sql`, and instead thinks that they are the query itself.
In Cloud Build it won't work:
steps:
- name: gcr.io/cloud-builders/gcloud
  entrypoint: 'bq'
  args: ['query', '--use_legacy_sql=false', '`cat my_query.sql`']
I also tried piping it instead of using cat, but I get the same error.
I must be missing something obvious here, but I can't see it. I could build a custom docker image, and wrap everything in a shell script, but I'd rather not have to do that if possible.
How do you use Cloud Build with shell evaluation inside a build step?
You can create a custom Bash script, e.g.:
#!/bin/bash
if [ $# -eq 0 ]; then
  echo "No arguments supplied"
  exit 1  # bail out instead of running bq with no input file
fi
bq query --use_legacy_sql=false < "$1"
Name this run_query.sh, then define your steps as:
steps:
- name: gcr.io/cloud-builders/gcloud
  entrypoint: 'bash'
  args: ['run_query.sh', 'my_query.sql']
Disclaimer: this is based on reading the docs, but I haven't actually used Cloud Build.
I have done this:
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  dir: 'my/directory'
  args: ['-c', 'bq --project_id=my-project-name query --use_legacy_sql=false < ./my_query.sql']
Which works with gcloud builds submit ... and eliminates one file if you prefer.
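The underlying reason the original attempt fails: Cloud Build passes args directly to the entrypoint binary (an exec-style invocation with no shell in between), so backticks are never evaluated; making bash the entrypoint with -c reintroduces a shell. A minimal sketch of the same idea using $() instead of a redirect, assuming my_query.sql sits in the repository root:

steps:
- name: gcr.io/cloud-builders/gcloud
  entrypoint: 'bash'
  args: ['-c', 'bq query --use_legacy_sql=false "$(cat my_query.sql)"']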

Is it possible to run an Ansible playbook from a Chef AWS/OpsWorks cookbook?

I'm trying to figure out whether it's possible to create a Chef cookbook that SSHes into an Ansible server and runs an Ansible playbook from AWS OpsWorks on the current node.
I'm thinking of a script that I can put in an execute resource like this:
define :foobar_magento2_deploy do
  release_path = node[:app_release_path]
  execute 'Ansible playbook' do
    command "ssh -i key ansible-server 'ansible-playbook arg1 arg2'"
  end
end
Do you think it's possible? Are there any caveats? Hints?
Edit, following @coderanger's answer:
define :foobar_magento2_deploy do
  release_path = node[:app_release_path]
  execute 'Ansible playbook' do
    # chained with && because assigning command repeatedly would keep only the last value
    command "git clone ansible-playbook && cd ansible-playbook && ansible-playbook -l localhost playbook.yml"
  end
end
So a couple of things:
OpsWorks Stacks is dangerously out of date and using it should be considered highly suspect.
I don't actually recognize that define block thing in there, maybe that's an older OpsWorks syntax?
You can definitely run an Ansible playbook from Chef code, but I would probably go a little simpler than you have there. Probably just run ansible-playbook locally and aim it at localhost.
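As a rough illustration of that last suggestion, here is a Chef sketch that clones a playbook repo and runs it against the node itself (the repository URL and paths are hypothetical placeholders):

git '/opt/ansible-playbook' do
  repository 'https://example.com/ansible-playbook.git'  # hypothetical repo URL
  action :sync
end

execute 'run ansible playbook locally' do
  command 'ansible-playbook -i localhost, -c local playbook.yml'
  cwd '/opt/ansible-playbook'
end

The -i localhost, -c local flags make ansible-playbook target the node it runs on over a local connection, so no SSH hop to an Ansible server is needed.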

Ansible Keepass integration via python script

I am very new to Ansible and would like to test a few things.
I have a couple of Amazon EC2 instances and would like to install different software components on them. I don't want to have the (plaintext) credentials of the technical users inside Ansible scripts or config files. I know that it is possible to encrypt those files, but I want to try KeePass as a central password management tool. So my installation scripts should read the credentials from a .kdbx (KeePass 2) database file before starting the actual installation.
So far I have written a basic Python script for reading the .kdbx file. The script outputs a JSON object via:
print json.dumps(inventory, sort_keys=False)
The output looks like the following:
{"cdc":
{"cdc_test_server":
{"cdc_test_user":
{"username": "cdc_test_user",
"password": "password"}
}
}
}
Now I want the Python script to be executed by Ansible, and the key-value pairs of its output to be included/registered as Ansible variables. So far my playbook looks as follows:
- hosts: 127.0.0.1
  connection: local
  tasks:
    - name: "Test Playbook Functionality"
      command: python /usr/local/test.py
      register: pass
    - debug: var=pass.stdout
    - name: "Include json user output"
      set_fact: passwords="{{ pass.stdout | from_json }}"
    - debug: " {{ passwords.cdc.cdc_test_server.cdc_test_user.password }} "
The first debug prints the correct JSON output, but I am not able to include the variables in Ansible so that I can use them via Jinja2 notation. set_fact doesn't throw an exception, but the last debug just returns a "Hello world" message. So my question is: how do I properly include the JSON key-value pairs as Ansible variables via a task?
See Ansible KeePass Lookup Plugin
ansible_user: "{{ lookup('keepass', 'path/to/entry', 'username') }}"
ansible_become_pass: "{{ lookup('keepass', 'path/to/entry', 'password') }}"
You may want to use facts.d and place your Python script there so its output becomes available as a local fact (a sketch follows below).
Or write a simple action plugin that returns a JSON object, which eliminates the need for the stdout -> from_json conversion.
By the way, your last debug returns "Hello world" because the debug module needs msg= or var=; a bare string argument is ignored and the default message is printed. Your set_fact most likely worked - try debug: msg="{{ passwords.cdc.cdc_test_server.cdc_test_user.password }}".
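A minimal sketch of the facts.d route, assuming the script is deployed to the managed host (the file name keepass.fact is a hypothetical choice):

#!/usr/bin/env python
# /etc/ansible/facts.d/keepass.fact - must be executable and print JSON to stdout.
# Ansible runs it during fact gathering and exposes the result as ansible_local.keepass.
import json

inventory = {"cdc": {"cdc_test_server": {"cdc_test_user":
    {"username": "cdc_test_user", "password": "password"}}}}

print(json.dumps(inventory, sort_keys=False))

A task could then reference {{ ansible_local.keepass.cdc.cdc_test_server.cdc_test_user.password }} directly, with no register/from_json step.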
Late to the party, but it seems your use case is primarily covered by keepass-inventory. And it doesn't require any playbook "magic". Disclaimer: I contribute to this non-profit.
export KDB_PATH=example.kdbx
export KDB_PASS=example
ansible all --list-hosts -i keepass-inventory.py

Ansible - On error, exit role and run cleanup

I'm trying to spin up an AWS deployment environment in Ansible, and I want to make it so that if something fails along the way, Ansible tears down everything on AWS that has been spun up so far. I can't figure out how to catch an error within a role.
For example:
<main.yml>
- hosts: localhost
  connection: local
  roles:
    - make_ec2_role
    - make_rds_role
    - make_s3_role
  # 2. Then I want it to run some code based on that error here.

<make_rds_role>
- name: "Make it"
  rds:
    params: etc  # <-- 1. Let's say it fails in the middle here
I've tried:
- name: this command prints FAILED when it fails
  command: /usr/bin/example-command -x -y -z
  register: command_result
  failed_when: "'FAILED' in command_result.stderr"
As well as other things from the documentation, but what I really want is a way to use something like the "block" and "rescue" directives. As far as I can tell, those only work within the same playbook and on plays, not roles. Does anyone have a good way to do this?
Wrap the tasks inside your roles in a block/rescue construct.
Make sure that the rescue block has at least one task – this way Ansible will not mark the host as failed.
Like this:
- block:
    - name: task 1
      ... # something bad may happen here
    - name: task N
  rescue:
    - assert: # we need a dummy task here to prevent our host from being failed
        that: ansible_failed_task is defined
Recent versions of Ansible register ansible_failed_task and ansible_failed_result when a rescue block is hit.
So you can do some post_tasks in your main.yml playbook like this:
post_tasks:
  - debug:
      msg: "Failed task: {{ ansible_failed_task }}, failed result: {{ ansible_failed_result }}"
    when: ansible_failed_task is defined
But be warned that this trick will NOT prevent other roles from executing.
So in your example, if make_rds_role fails, Ansible will still apply make_s3_role and run your post_tasks afterwards.
If you need to prevent that, add a check for the ansible_failed_task fact at the beginning of each role, or restructure the play as in the sketch below.
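One way to restructure it: run the roles as tasks via include_role inside a single block, so a failure skips the remaining roles and drops straight into the rescue section. A sketch (the teardown_aws cleanup role is a hypothetical placeholder):

- hosts: localhost
  connection: local
  tasks:
    - block:
        - include_role:
            name: make_ec2_role
        - include_role:
            name: make_rds_role
        - include_role:
            name: make_s3_role
      rescue:
        - debug:
            msg: "Failed in {{ ansible_failed_task.name }}, tearing down"
        - include_role:
            name: teardown_aws  # hypothetical role that removes whatever was created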

Trouble with Ansible ec2 dynamic inventory

I'm using Ansible to configure and deploy several servers in EC2. Since these servers change frequently, I'd like to use dynamic inventory. I have set up ec2.py and ec2.ini on my Jenkins server (this is where the Ansible scripts are run), but am running into an issue when I run the playbook:
ERROR! Specified --limit does not match any hosts
Which clearly means that my hosts are not being selected correctly. When I run:
./ec2.py --list >> aws_example.json
everything looks good in aws_example.json.
I'm trying to select servers based on two tags, Name and environment. For example, I have a server with a 'Name' tag of 'api' and an 'environment' tag of 'production'.
I've set up the destination_format_tags like so:
destination_format_tags = Name,environment
and run Ansible as follows:
ansible-playbook site.yml -i ec2.py -l api
I've also tried changing the hostname_variable:
hostname_variable = tag_Name.tag_environment
and running the command like so:
ansible-playbook site.yml -i ec2.py -l api.production
Additionally, I've tried using only one tag with the hostname_variable:
hostname_variable = tag_Name
and running the command like so:
ansible-playbook site.yml -i ec2.py -l api
None of these configurations work. I'm also unable to find much documentation about these settings, so I'm not sure how to configure them correctly. Can anyone point me in the right direction?
So the problem was how I was representing my host names in my playbook. Setting the hostname variable was the right thing to do:
hostname_variable = tag_Name
And here's how to represent it in the playbook:
- name: configure and deploy api servers
  hosts: tag_Name_api
  remote_user: ec2-user
  become: true
  roles:
    - java
    - nginx
    - api
Additionally, it'll need to be called like so:
ansible-playbook site.yml -i ec2.py -l tag_Name_api
Make sure to change special characters such as . or - to _.
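Since ec2.py also builds a group for every tag, you can still combine the two tags from the question without touching hostname_variable, by intersecting the tag groups with the :& pattern (group names below assume the Name=api and environment=production tags from the question):

ansible-playbook site.yml -i ec2.py -l 'tag_Name_api:&tag_environment_production'

This limits the run to hosts that are in both groups.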