Ansible Tower custom environments

I'm running Ansible Tower v3.8.6 on a RHEL 8 server and I've defined a custom environment by following this
link. I've added this custom environment under Settings - System - "Custom Virtual Environment Paths" and also made it the default for my organisation.
I've added the following to my playbook, and it confirms that I'm using the "correct" versions of Ansible and Python as defined in my custom virtual environment.
- name: get ansible and python versions
  shell: |
    ansible --version
    python -V
  register: result

- name: display ansible and python versions
  debug:
    var: result.stdout
I set up this environment so I can interact with our oVirt 4.5 environment. Despite the fact that I have the Python oVirt SDK installed, I keep getting this error:
"msg": "ovirtsdk4 version 4.4.0 or higher is required for this module"
I've googled and googled but none of the solutions work for me.
Is this a lost cause? Upgrading to Ansible Automation Platform is out of the question.
Any ideas how I can make this work?
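One quick sanity check (a generic sketch, not part of the original question; the venv path in the comment is taken from the transcript below) is to ask each interpreter where it would import ovirtsdk4 from. The error above usually means the interpreter that actually runs the module cannot see the SDK, even if some other Python on the box can:

```python
# Run with the interpreter you believe is in use, e.g.:
#   /var/lib/awx/venv/rhv-4.5/bin/python check_sdk.py
import importlib.util
import sys

def sdk_location(module_name="ovirtsdk4"):
    """Return the path the current interpreter would import a module from, or None."""
    spec = importlib.util.find_spec(module_name)
    return spec.origin if spec else None

print("interpreter:", sys.executable)
print("ovirtsdk4:", sdk_location())
```

If the venv's interpreter prints a path but the job still fails, Tower is running the module under a different interpreter.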
#pwd
/var/lib/awx/venv/rhv-4.5
#source bin/activate
(rhv-4.5) #ansible --version
ansible [core 2.12.6]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /var/lib/awx/venv/rhv-4.5/lib/python3.8/site-packages/ansible
ansible collection location = /var/lib/awx/vendor/inventory_collections:/opt/collections
executable location = /var/lib/awx/venv/rhv-4.5/bin/ansible
python version = 3.8.12 (default, Sep 16 2021, 10:46:05) [GCC 8.5.0 20210514 (Red Hat 8.5.0-3)]
jinja version = 3.1.2
libyaml = True
(rhv-4.5) #python -V
Python 3.8.12
(rhv-4.5) #pip list
Package Version
----------------------- ---------
ansible-core 2.12.6
certifi 2022.6.15
cffi 1.15.1
charset-normalizer 2.1.1
cryptography 37.0.4
idna 3.3
Jinja2 3.1.2
lxml 4.9.1
MarkupSafe 2.1.1
ntlm-auth 1.5.0
ovirt-engine-sdk-python 4.5.2
packaging 21.3
pip 22.2.2
psutil 5.9.1
pycparser 2.21
pycurl 7.45.1
pykerberos 1.2.4
pyparsing 3.0.9
pywinrm 0.4.3
PyYAML 6.0
requests 2.28.1
requests-ntlm 1.1.0
resolvelib 0.5.4
setuptools 65.3.0
six 1.16.0
urllib3 1.26.12
wheel 0.37.1
xmltodict 0.13.0
(rhv-4.5) #
EDIT
I wrote a small playbook to test ovirt_auth from within the venv.
---
- name: Test ovirt_auth
  hosts: localhost
  vars:
    rhv1_url: "https://rhvm.server.local/ovirt-engine/api"
    rhv1_username: "me@rhvm.local"
    rhv1_passwd: "Super Secure Password"
  tasks:
    - name: Authenticate with RHV
      ovirt.ovirt.ovirt_auth:
        url: "{{ rhv1_url }}"
        username: "{{ rhv1_username }}"
        password: "{{ rhv1_passwd }}"
    - name: debug ovirt_auth
      debug:
        var: ovirt_auth
This worked and the debug task printed the expected output.
When I ran it through Ansible Tower, it failed and the "ovirtsdk4 version 4.4.0 or higher is required for this module" message was back.
So it looks like Ansible Tower just isn't getting the memo...

So the solution was deceptively simple; shout out to Kevin from Red Hat Support for the answer.
The workflow runs on the Ansible Tower server using an inventory called 'inv-localhost'. This inventory already had ansible_connection: local, but it needed ansible_python_interpreter: "{{ ansible_playbook_python }}" as well.
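For reference, a minimal sketch of what such an inventory could look like in YAML form (the group layout here is assumed; only the two variables come from the answer itself):

```yaml
# inv-localhost (sketch): run locally on the Tower node,
# using the same Python interpreter that runs the playbook
all:
  hosts:
    localhost:
      ansible_connection: local
      ansible_python_interpreter: "{{ ansible_playbook_python }}"
```

With ansible_playbook_python, module code executes under the custom venv's interpreter instead of the system Python, which is why the ovirtsdk4 import then succeeds.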
Now it works!
In addition, I hadn't followed the custom environment documentation correctly. It should go without saying, but read the docs closely...
Thanks

Related

Ansible aws_ec2 inventory plugin did not pass its verify_file() method

Unfortunately our aws_ec2 inventory plugin does not work anymore and I can't figure out why.
It worked until a few days ago, but after an update on the Ansible VM it only shows the same error.
Error:
/opt/ansible/bin/ansible-inventory -i inventory/ec2.aws.yml --graph -vvvvvv
ansible-inventory 2.9.2
config file = /home/XXXXX/ansible/ansible.cfg
configured module search path = [u'/home/XXXXX/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /opt/ansible/lib/python2.7/site-packages/ansible
executable location = /opt/ansible/bin/ansible-inventory
python version = 2.7.5 (default, Jun 28 2022, 15:30:04) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)]
Using /home/XXXXX/ansible/ansible.cfg as config file
setting up inventory plugins
ansible_collections.amazon.aws.plugins.inventory.aws_ec2 declined parsing /home/XXXXX/ansible/XXXXX/ec2.aws.yml as it did not pass its verify_file() method
I already checked if boto3 and botocore are installed, and they are for the python2.7 version that Ansible uses:
python2.7 -m pip freeze
boto3==1.26.69
botocore==1.29.69
This is the inventory yaml file:
plugin: amazon.aws.aws_ec2
cache: yes
cache_timeout: 600
regions:
  - eu-central-1
validate_certs: False
keyed_groups:
  - prefix: os
    key: tags['OS']
hostnames:
  - tag:Name
compose:
  ansible_host: private_ip_address
I am using this in the "/home/XXXXX/ansible/ansible.cfg":
[inventory]
enable_plugins = vmware_vm_inventory, amazon.aws.aws_ec2
Also the amazon.aws collection is installed:
/opt/ansible/bin/ansible-galaxy collection install amazon.aws
Process install dependency map
Starting collection install process
Skipping 'amazon.aws' as it is already installed
Also the credentials are exported as env vars:
env
AWS_ACCESS_KEY_ID=XXXXXXXX
AWS_SECRET_ACCESS_KEY=XXXXXXXXXX
AWS_SESSION_TOKEN=XXXXXXXXXX
Does anyone have an idea what the issue is?
I was trying to run a playbook and every time the same issue comes up.
ansible_collections.amazon.aws.plugins.inventory.aws_ec2 declined parsing /home/XX/ansible/XX/ec2.aws.yml as it did not pass its verify_file() method
ec2.aws.yml has never been a valid filename for use with the aws_ec2 inventory plugin.
Inventory files for this plugin must end in either aws_ec2.yml or aws_ec2.yaml.
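The plugin's verify_file() gate is essentially a filename-suffix check. A simplified illustration of that rule (not the actual plugin source, which also verifies the file exists):

```python
def verify_file(path):
    # The aws_ec2 inventory plugin only accepts inventory files
    # whose names end in aws_ec2.yml or aws_ec2.yaml.
    return path.endswith(("aws_ec2.yml", "aws_ec2.yaml"))

print(verify_file("ec2.aws.yml"))       # False - declined
print(verify_file("ec2.aws_ec2.yml"))   # True  - accepted
```

Renaming the file to, say, ec2.aws_ec2.yml is enough to make the plugin pick it up.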

Ansible Failed to parse inventory(gcp_compute plugin)

I'm trying to ping all instances in the group, using a virtualenv where I have installed all the needed packages with a Makefile. I cloned the repository of my project, and I should say that this works on my colleagues' laptops (Intel Macs), but not on mine (M2 Mac).
Using ansible all -i ./inventories/infra-dev/ -m ping
I'm getting this output:
[WARNING]: * Failed to parse /<some>/<some>/<some>/<some>-ansible/inventories/infra-dev/infra_dev_gcp.yaml with auto plugin: Invalid control character at: line 5 column 38 (char 170)
[WARNING]: * Failed to parse /<some>/<some>/<some>/<some>-ansible/inventories/infra-dev/infra_dev_gcp.yaml with yaml plugin: Plugin configuration YAML file, not YAML inventory
[WARNING]: * Failed to parse /<some>/<some>/<some>/<some>-ansible/inventories/infra-dev/infra_dev_gcp.yaml with ini plugin: Invalid host pattern '---' supplied, '---' is normally a sign this is a YAML file.
[WARNING]: Unable to parse /<some>/<some>/<some>/<some>-ansible/inventories/infra-dev/infra_dev_gcp.yaml as an inventory source
[WARNING]: Unable to parse /<some>/<some>/<some>/<some>-ansible/inventories/infra-dev as an inventory source
ERROR! No inventory was parsed, please check your configuration and options.
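The first warning ("Invalid control character at: line 5 column 38") points at a non-printing byte inside the inventory file itself, which would explain why the same file parses on other machines but not after a fresh clone or edit. A generic sketch for locating such characters (the sample string is hypothetical, not the asker's file):

```python
def find_control_chars(text):
    """Locate ASCII control characters (other than tab) line by line."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for col, ch in enumerate(line, start=1):
            if ord(ch) < 32 and ch != "\t":
                hits.append((lineno, col, hex(ord(ch))))
    return hits

# Hypothetical sample with a stray BEL (0x07) byte on line 3
sample = "plugin: gcp_compute\nprojects:\n  - my-proj\x07ect\n"
print(find_control_chars(sample))  # [(3, 12, '0x7')]
```

Re-saving the file without the offending byte makes the auto plugin's JSON/YAML parse succeed again.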
Versions:
Package Version
-------------- ---------
ansible 2.9.1
cachetools 4.2.4
certifi 2022.9.24
cffi 1.15.1
chardet 3.0.4
cryptography 38.0.3
google-auth 1.23.0
idna 2.10
Jinja2 3.1.2
MarkupSafe 2.1.1
pathspec 0.10.1
pip 22.3
pyasn1 0.4.8
pyasn1-modules 0.2.8
pycparser 2.21
PyYAML 6.0
requests 2.25.0
rsa 4.9
setuptools 65.5.0
six 1.16.0
urllib3 1.26.12
wheel 0.37.1
yamllint 1.23.0
And the inventory file:
plugin: gcp_compute
projects:
  - <some>-<some>-<some>
keyed_groups:
  - prefix: "gcp"
    key: labels['group']
filters:
  - labels.ecosystem = "dev" AND labels.ecolevel = "infra-dev"
auth_kind: serviceaccount
service_account_file: ~/.<some>/<some>-<some>-<some>.json
hostnames:
  - name
compose:
  ansible_host: networkInterfaces[0].networkIP
Maybe your service account file is wrong or does not exist:
service_account_file: ~/.<some>/<some>-<some>-<some>.json
You should use a valid service account file. You can get this file from:
https://console.cloud.google.com/apis/credentials?pli=1
Follow these simple steps: How can I get the file "service_account.json" for Google Translate API?
Then put it in the YAML configuration:
service_account_file: ~/service_account.json
or something similar.

Ansible cannot find required boto3 module

I am having trouble fixing a seemingly simple issue. I am missing a module during execution of an Ansible playbook from a git repository. It is trying to execute the Ansible ec2_group_info module, which requires the AWS boto3 library. The error is the following:
[WARNING]: Skipping plugin (/home/user/git-repo-name/plugins/filters/kms_filters.py) as it seems to
be invalid: No module named 'boto3'
failed: [localhost] (item=DEV) => {"ansible_loop_var": "item", "changed": false, "item":
"DEV", "msg": "boto3 required for this module"}
My ansible information using ansible --version inside of the repo folder locally looks like this:
ansible 2.9.6
config file = /home/user-name/repo-name/ansible.cfg
configured module search path = ['/home/user-name/repo-name/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 3.8.2 (default, Apr 27 2020, 15:53:34) [GCC 9.3.0]
Outside of the repo folder locally it looks like this:
ansible 2.9.6
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/user-name/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 3.8.2 (default, Apr 27 2020, 15:53:34) [GCC 9.3.0]
Python 3 is installed as well as boto3 globally, and I can use boto3 properly using python3.
I have searched many forum pages, but I could not find a satisfying solution to my issue...
To me it seems like it does not search all of the globally installed options, but only those within the repo...
It seems to be failing to find the module in this plugin kms_filters.py as well. The content of the file is the following:
import boto3
import base64

kms = boto3.client('kms', region_name='region-name')

def kms_decrypt(ciphertext):
    return kms.decrypt(CiphertextBlob=base64.b64decode(ciphertext)).get('Plaintext')

def kms_encrypt(plaintext, key):
    return base64.b64encode(kms.encrypt(KeyId=key, Plaintext=plaintext).get('CiphertextBlob'))

class FilterModule(object):
    def filters(self):
        return {'kms_encrypt': kms_encrypt, 'kms_decrypt': kms_decrypt}
How would I need to configure it so that it can find the boto3 library? Where do I need to add any information that makes this possible? If possible, I would prefer the plugin to be usable within the repo configuration itself.
In this case, you might have multiple Python versions. My guess is your python3 soft link points to python3.6. Please run ls -lrt python* in /usr/bin to identify the python3 version. It is likely that you installed boto3 for a different Python version.
I suggest installing boto3 via Ansible pre_tasks. That way boto3 will always be present.
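A sketch of that pre_tasks suggestion (the pip module and its executable parameter are standard Ansible; the host and pip executable are assumptions to adjust for your setup):

```yaml
- hosts: localhost
  connection: local
  pre_tasks:
    - name: Ensure boto3 is installed for the interpreter Ansible uses
      pip:
        name: boto3
        executable: pip3   # point this at the pip matching your python3
  tasks:
    - name: boto3-backed modules and filter plugins can now load
      debug:
        msg: boto3 should be importable from here on
```

Because pre_tasks run before roles and tasks, boto3 is guaranteed to be in place before any module or filter plugin that imports it.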

How to install Ansible in Amazon Linux EC2?

I pick this Amazon Linux: Amazon Linux AMI 2017.09.1 (HVM), SSD Volume Type - ami-1a033c7a.
I installed Ansible using the command:
sudo pip install ansible
and it shows the install completed. When I run ansible --version, it shows:
ansible 2.4.1.0
config file = None
configured module search path = [u'/home/ec2-user/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python2.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 2.7.12 (default, Nov 2 2017, 19:20:38) [GCC 4.8.5 20150623 (Red Hat 4.8.5-11)]
Why config file = None? Shouldn't it show /etc/ansible/ansible.cfg? I do not see /etc/ansible/hosts, not even the folder /etc/ansible. Did I install correctly? Where is the folder /etc/ansible?
why config file = None?
Because at the time of running ansible --version no config file was found.
shouldn't it show /etc/ansible/ansible.cfg?
No. It should show the ansible.cfg actually being used.
Per documentation, Ansible tries to find the config file in:
ANSIBLE_CONFIG (an environment variable)
ansible.cfg (in the current directory)
.ansible.cfg (in the home directory)
/etc/ansible/ansible.cfg
ansible --version will show you the exact path to the one being used.
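That lookup order can be sketched as follows (a simplified illustration of the documented behaviour, not Ansible's actual source):

```python
import os

def find_ansible_cfg(cwd, home):
    """Return the first ansible.cfg found in the documented search order, or None."""
    candidates = [
        os.environ.get("ANSIBLE_CONFIG"),     # 1. environment variable
        os.path.join(cwd, "ansible.cfg"),     # 2. current directory
        os.path.join(home, ".ansible.cfg"),   # 3. home directory
        "/etc/ansible/ansible.cfg",           # 4. system-wide
    ]
    for path in candidates:
        if path and os.path.isfile(path):
            return path
    return None  # nothing found -> "config file = None"
```

A pip install creates none of these files, so on a fresh system every candidate misses and ansible --version reports None.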
Strictly speaking the last point is not always true, as package managers and virtual environment managers might cause the /etc directory to be located elsewhere.
did I install correctly
You didn't mention any error or warning during the installation and ansible --version returned a proper response.
There is no reason not to believe it's installed properly.
where is the folder /etc/ansible?
It doesn't exist on your system. There is no default inventory file, nor a configuration file created by the installation package.
Create one.
Here I answer the question myself.
There are many ways to install Ansible, and you end up with different default settings depending on the OS. Many tutorials just assume ansible_hosts and ansible.cfg are already in /etc/ansible, which is not the case if you install Ansible using pip.
In fact, if you install Ansible using pip, you will not see ansible.cfg and ansible_hosts in /etc/ansible; the folder /etc/ansible does not even exist. But never mind, you can create these two files yourself as follows:
suppose you want to store ansible_hosts and ansible.cfg in /home/ec2-user, then you can:
echo <remote_host> > /home/ec2-user/ansible_hosts
export ANSIBLE_INVENTORY=/home/ec2-user/ansible_hosts
wget https://raw.githubusercontent.com/ansible/ansible/devel/examples/ansible.cfg
mv ansible.cfg /home/ec2-user/
export ANSIBLE_CONFIG=/home/ec2-user/ansible.cfg
Then if you run ansible --version, you will see:
ansible 2.4.1.0
config file = /home/ec2-user/ansible.cfg
....
and if you test an ansible ad-hoc command (my remote host is Ubuntu, so I use -u ubuntu; change it to yours):
ansible all -m ping -u ubuntu
then you will see ansible ping the remote host successfully.
This shows that ansible does work.

AWS elastic beanstalk git deployment suddenly failing due to composer issue despite no changes to composer.json

I have a number of environments running in AWS Elastic Beanstalk. I deploy directly from git using git aws.push.
I use composer.json to install the required PHP SDKs. I've not changed this file for a long time, but it's suddenly started failing in all environments.
Output from the AWS logs is
+ echo 'Found composer.json file. Attempting to install vendors.'
Found composer.json file. Attempting to install vendors.
+ composer.phar install --no-ansi --no-interaction
Loading composer repositories with package information
Installing dependencies
[RuntimeException]
Could not load package aws/aws-sdk-php in http://packagist.org: [UnexpectedValueException] Could not parse version constraint ^5.3: Invalid version string "^5.3"
[UnexpectedValueException]
Could not parse version constraint ^5.3: Invalid version string "^5.3"
install [--prefer-source] [--prefer-dist] [--dry-run] [--dev] [--no-dev] [--no-custom-installers] [--no-scripts] [--no-progress] [-v|vv|vvv|--verbose] [-o|--optimize-autoloader]
2015-05-28 09:57:18,414 [ERROR] (15056 MainThread) [directoryHooksExecutor.py-33] [root directoryHooksExecutor error] Script /opt/elasticbeanstalk/hooks/appdeploy/pre/10_composer_install.sh failed with returncode 1
my composer.json is:
{
    "require": {
        "aws/aws-sdk-php": "2.7.*",
        "monolog/monolog": "1.0.*",
        "facebook/php-sdk-v4": "4.0.*",
        "ext-curl": "*",
        "paypal/sdk-core-php": "v1.4.2",
        "paypal/permissions-sdk-php": "v2.5.106",
        "paypal/adaptivepayments-sdk-php": "2.*"
    }
}
I notice it does want aws-sdk-php, but the version I require is not ^5.3 (which is what the logs complain about).
5.3 made me think of the PHP version; checking php -v I get:
php -v
PHP 5.5.12 (cli) (built: May 20 2014 22:27:36)
Copyright (c) 1997-2014 The PHP Group
Zend Engine v2.5.0, Copyright (c) 1998-2014 Zend Technologies
with Zend OPcache v7.0.4-dev, Copyright (c) 1999-2014, by Zend Technologies
I've tried re-installing older versions that have previously installed fine, and they also fail with the same error, so this has to be due to the environment. Does anyone know if there have been changes recently?
Create a folder in your root of the project called .ebextensions. Then create a new file in there called 01-composer-install.config with the following content.
commands:
  01_update_composer:
    command: export COMPOSER_HOME=/root && /usr/bin/composer.phar self-update
option_settings:
  - namespace: aws:elasticbeanstalk:application:environment
    option_name: COMPOSER_HOME
    value: /root
I just had to update Composer using the instructions here:
https://getcomposer.org/download/
(The likely cause: the caret ^ version constraint is only understood by newer Composer releases, so an outdated composer.phar fails to parse package metadata on Packagist that uses it.)