I'm trying to ping all the instances in the group, using a virtualenv where I've installed all the needed packages via a Makefile. I cloned my project's repository, and I should note that this works on my colleagues' laptops (Mac Intel), but not on mine (Mac M2).
using ansible all -i ./inventories/infra-dev/ -m ping
I'm getting this output:
[WARNING]: * Failed to parse /<some>/<some>/<some>/<some>-ansible/inventories/infra-dev/infra_dev_gcp.yaml with auto plugin: Invalid control character at: line 5 column 38 (char 170)
[WARNING]: * Failed to parse /<some>/<some>/<some>/<some>-ansible/inventories/infra-dev/infra_dev_gcp.yaml with yaml plugin: Plugin configuration YAML file, not YAML inventory
[WARNING]: * Failed to parse /<some>/<some>/<some>/<some>-ansible/inventories/infra-dev/infra_dev_gcp.yaml with ini plugin: Invalid host pattern '---' supplied, '---' is normally a sign this is a YAML
file.
[WARNING]: Unable to parse /<some>/<some>/<some>/<some>-ansible/inventories/infra-dev/infra_dev_gcp.yaml as an inventory source
[WARNING]: Unable to parse /<some>/<some>/<some>/<some>-ansible/inventories/infra-dev as an inventory source
ERROR! No inventory was parsed, please check your configuration and options.
Versions:
Package Version
-------------- ---------
ansible 2.9.1
cachetools 4.2.4
certifi 2022.9.24
cffi 1.15.1
chardet 3.0.4
cryptography 38.0.3
google-auth 1.23.0
idna 2.10
Jinja2 3.1.2
MarkupSafe 2.1.1
pathspec 0.10.1
pip 22.3
pyasn1 0.4.8
pyasn1-modules 0.2.8
pycparser 2.21
PyYAML 6.0
requests 2.25.0
rsa 4.9
setuptools 65.5.0
six 1.16.0
urllib3 1.26.12
wheel 0.37.1
yamllint 1.23.0
And the inventory file:
plugin: gcp_compute
projects:
  - <some>-<some>-<some>
keyed_groups:
  - prefix: "gcp"
    key: labels['group']
filters:
  - labels.ecosystem = "dev" AND labels.ecolevel = "infra-dev"
auth_kind: serviceaccount
service_account_file: ~/.<some>/<some>-<some>-<some>.json
hostnames:
  - name
compose:
  ansible_host: networkInterfaces[0].networkIP
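For what it's worth, the failing file can also be parsed on its own, which narrows the warnings down to this single inventory source (same redacted path as above):

ansible-inventory -i ./inventories/infra-dev/infra_dev_gcp.yaml --graph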
Maybe your service account file is wrong or does not exist:
service_account_file: ~/.<some>/<some>-<some>-<some>.json
You should use a valid service account file. You can get this file from:
https://console.cloud.google.com/apis/credentials?pli=1
Follow these simple steps: How can I get the file "service_account.json" for Google Translate API?
Then point to it in your YAML configuration:
service_account_file: ~/service_account.json
or something similar
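Note also that the first warning points at an invalid control character at line 5, column 38 of the inventory file itself. Since yamllint is already in the virtualenv's package list, a quick sanity check (a sketch using the redacted paths from the question) might be:

# Lint the inventory; yamllint reports YAML syntax problems,
# including stray control characters, with line/column positions.
yamllint inventories/infra-dev/infra_dev_gcp.yaml

# Make any non-printable characters on the reported line visible
# (cat -vet works with the BSD cat shipped on macOS).
sed -n '5p' inventories/infra-dev/infra_dev_gcp.yaml | cat -vet

# Confirm the service account file exists at the configured path.
ls -l ~/.<some>/<some>-<some>-<some>.json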
Unfortunately, our aws_ec2 inventory plugin does not work anymore and I can't figure out why.
It worked over the last few days, but after an update on the Ansible VM it now only shows the same error.
Error:
/opt/ansible/bin/ansible-inventory -i inventory/ec2.aws.yml --graph -vvvvvv
ansible-inventory 2.9.2
config file = /home/XXXXX/ansible/ansible.cfg
configured module search path = [u'/home/XXXXX/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /opt/ansible/lib/python2.7/site-packages/ansible
executable location = /opt/ansible/bin/ansible-inventory
python version = 2.7.5 (default, Jun 28 2022, 15:30:04) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)]
Using /home/XXXXX/ansible/ansible.cfg as config file
setting up inventory plugins
ansible_collections.amazon.aws.plugins.inventory.aws_ec2 declined parsing /home/XXXXX/ansible/XXXXX/ec2.aws.yml as it did not pass its verify_file() method
I already checked whether boto3 and botocore are installed, and they are, for the Python 2.7 version that Ansible uses:
python2.7 -m pip freeze
boto3==1.26.69
botocore==1.29.69
This is the inventory yaml file:
plugin: amazon.aws.aws_ec2
cache: yes
cache_timeout: 600
regions:
  - eu-central-1
validate_certs: False
keyed_groups:
  - prefix: os
    key: tags['OS']
hostnames:
  - tag:Name
compose:
  ansible_host: private_ip_address
I am using this in the "/home/XXXXX/ansible/ansible.cfg":
[inventory]
enable_plugins = vmware_vm_inventory, amazon.aws.aws_ec2
Also the amazon.aws collection is installed:
/opt/ansible/bin/ansible-galaxy collection install amazon.aws
Process install dependency map
Starting collection install process
Skipping 'amazon.aws' as it is already installed
Also the credentials are exported as env vars:
env
AWS_ACCESS_KEY_ID=XXXXXXXX
AWS_SECRET_ACCESS_KEY=XXXXXXXXXX
AWS_SESSION_TOKEN=XXXXXXXXXX
Does anyone have an idea what the issue is?
I was trying to run a playbook, and every time the same issue comes up:
ansible_collections.amazon.aws.plugins.inventory.aws_ec2 declined parsing /home/XX/ansible/XX/ec2.aws.yml as it did not pass its verify_file() method
ec2.aws.yml has never been a valid filename for use with the aws_ec2 inventory plugin.
Inventory files for this plugin must end in either aws_ec2.yml or aws_ec2.yaml.
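For example (a sketch using the redacted paths from the question), renaming the file so it matches the required suffix and re-running the graph command should get past the verify_file() check:

# Keep the name, just give it a suffix the plugin's verify_file() accepts.
mv /home/XXXXX/ansible/XXXXX/ec2.aws.yml /home/XXXXX/ansible/XXXXX/ec2.aws_ec2.yml

# Re-run the inventory graph against the renamed file.
/opt/ansible/bin/ansible-inventory -i /home/XXXXX/ansible/XXXXX/ec2.aws_ec2.yml --graph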
I'm running Ansible Tower v3.8.6 on a RHEL 8 server and I've defined a custom environment by following this link. I've added this custom environment under Settings - System - "Custom Virtual Environment Paths" and also made it the default for my organisation.
I've added the following to my playbook, and it confirms that I'm using the "correct" versions of Ansible and Python as defined in my custom virtual environment.
- name: get ansible and python versions
  shell: |
    ansible --version
    python -V
  register: result

- name: display ansible and python versions
  debug:
    var: result.stdout
I set up this environment so I can interact with our oVirt 4.5 environment. Despite having the Python oVirt SDK installed, I keep getting this error:
"msg": "ovirtsdk4 version 4.4.0 or higher is required for this module"
I've googled and googled but none of the solutions work for me.
Is this a lost cause? Upgrading to Ansible Automation Platform is out of the question.
Any ideas how I can make this work?
#pwd
/var/lib/awx/venv/rhv-4.5
#source bin/activate
(rhv-4.5) #ansible --version
ansible [core 2.12.6]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /var/lib/awx/venv/rhv-4.5/lib/python3.8/site-packages/ansible
ansible collection location = /var/lib/awx/vendor/inventory_collections:/opt/collections
executable location = /var/lib/awx/venv/rhv-4.5/bin/ansible
python version = 3.8.12 (default, Sep 16 2021, 10:46:05) [GCC 8.5.0 20210514 (Red Hat 8.5.0-3)]
jinja version = 3.1.2
libyaml = True
(rhv-4.5) #python -V
Python 3.8.12
(rhv-4.5) #pip list
Package Version
----------------------- ---------
ansible-core 2.12.6
certifi 2022.6.15
cffi 1.15.1
charset-normalizer 2.1.1
cryptography 37.0.4
idna 3.3
Jinja2 3.1.2
lxml 4.9.1
MarkupSafe 2.1.1
ntlm-auth 1.5.0
ovirt-engine-sdk-python 4.5.2
packaging 21.3
pip 22.2.2
psutil 5.9.1
pycparser 2.21
pycurl 7.45.1
pykerberos 1.2.4
pyparsing 3.0.9
pywinrm 0.4.3
PyYAML 6.0
requests 2.28.1
requests-ntlm 1.1.0
resolvelib 0.5.4
setuptools 65.3.0
six 1.16.0
urllib3 1.26.12
wheel 0.37.1
xmltodict 0.13.0
(rhv-4.5) #
EDIT
I wrote a small playbook to test ovirt_auth from within the venv.
---
- name: Test ovirt_auth
  hosts: localhost
  vars:
    rhv1_url: "https://rhvm.server.local/ovirt-engine/api"
    rhv1_username: "me@rhvm.local"
    rhv1_passwd: "Super Secure Password"
  tasks:
    - name: Authenticate with RHV
      ovirt.ovirt.ovirt_auth:
        url: "{{ rhv1_url }}"
        username: "{{ rhv1_username }}"
        password: "{{ rhv1_passwd }}"

    - name: debug ovirt_auth
      debug:
        var: ovirt_auth
This worked and the debug printed the expected output.
When I ran it through Ansible Tower, it failed and the "ovirtsdk4 version 4.4.0 or higher is required for this module" message was back.
So it looks like Ansible Tower just isn't getting the memo...
So the solution was deceptively simple, and shout out to Kevin from Red Hat Support for the answer.
The workflow runs on the Ansible Tower server using an inventory called 'inv-localhost'. This inventory already had the "ansible_connection: local" but needed 'ansible_python_interpreter: "{{ansible_playbook_python}}"' as well.
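Expressed as inventory variables, the fix looks something like this (a minimal sketch; in Tower these are set as host/inventory variables on 'inv-localhost' rather than in a file):

all:
  hosts:
    localhost:
      ansible_connection: local
      # Run modules with the same Python that runs the playbook, i.e. the
      # custom venv where ovirt-engine-sdk-python is actually installed.
      ansible_python_interpreter: "{{ ansible_playbook_python }}"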
Now it works!
In addition, I'd not followed the custom environment documentation correctly. It should go without saying, but read the docs closely...
Thanks
I'm trying to deploy a simple Django web app to Azure App Service using a CI/CD pipeline (the most basic one offered by Microsoft for app deployment; no changes from me). However, I'm getting the following error:
2021-03-08T16:55:51.172914117Z File "", line 219, in _call_with_frames_removed
2021-03-08T16:55:51.172918317Z File "/home/site/wwwroot/deytabank_auth/wsgi.py", line 13, in
2021-03-08T16:55:51.172923117Z from django.core.wsgi import get_wsgi_application
2021-03-08T16:55:51.172927017Z ModuleNotFoundError: No module named 'django'
I checked other threads and tried everything mentioned, but it did not help; maybe I am missing something.
In wsgi.py I added:
import os
import sys
sys.path.append(os.path.dirname(os.path.abspath(__file__)) + '/..' )
sys.path.append(os.path.dirname(os.path.abspath(__file__)) + '/../licenses_api')
sys.path.append(os.path.dirname(os.path.abspath(__file__)) + '/../deytabank_auth')
from django.core.wsgi import get_wsgi_application
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'deytabank_auth.settings')
application = get_wsgi_application()
But I'm still getting the same error, where django is not recognized. I can see that requirements.txt is installed successfully, and it has all the necessary libraries there (including Django).
My CI/CD yaml file looks like this:
# Python to Linux Web App on Azure
# Build your Python project and deploy it to Azure as a Linux Web App.
# Change python version to one that's appropriate for your application.
# https://learn.microsoft.com/azure/devops/pipelines/languages/python

trigger:
- develop

variables:
  # Azure Resource Manager connection created during pipeline creation
  azureServiceConnectionId: '***'

  # Web app name
  webAppName: 'DeytabankAuth'

  # Agent VM image name
  vmImageName: 'ubuntu-latest'

  # Environment name
  environmentName: 'DeytabankAuth'

  # Project root folder. Point to the folder containing manage.py file.
  projectRoot: $(System.DefaultWorkingDirectory)

  # Python version: 3.7
  pythonVersion: '3.7'

stages:
- stage: Build
  displayName: Build stage
  jobs:
  - job: BuildJob
    pool:
      vmImage: $(vmImageName)
    steps:
    - task: UsePythonVersion@0
      inputs:
        versionSpec: '$(pythonVersion)'
      displayName: 'Use Python $(pythonVersion)'

    - script: |
        python -m venv antenv
        source antenv/bin/activate
        python -m pip install --upgrade pip
        pip install setup
        pip install -r requirements.txt
      workingDirectory: $(projectRoot)
      displayName: "Install requirements"

    - task: ArchiveFiles@2
      displayName: 'Archive files'
      inputs:
        rootFolderOrFile: '$(projectRoot)'
        includeRootFolder: false
        archiveType: zip
        archiveFile: $(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip
        replaceExistingArchive: true

    - upload: $(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip
      displayName: 'Upload package'
      artifact: drop

- stage: Deploy
  displayName: 'Deploy Web App'
  dependsOn: Build
  condition: succeeded()
  jobs:
  - deployment: DeploymentJob
    pool:
      vmImage: $(vmImageName)
    environment: $(environmentName)
    strategy:
      runOnce:
        deploy:
          steps:
          - task: UsePythonVersion@0
            inputs:
              versionSpec: '$(pythonVersion)'
            displayName: 'Use Python version'

          - task: AzureWebApp@1
            displayName: 'Deploy Azure Web App : DeytabankAuth'
            inputs:
              azureSubscription: $(azureServiceConnectionId)
              appName: $(webAppName)
              package: $(Pipeline.Workspace)/drop/$(Build.BuildId).zip
Maybe I need to configure something in the Azure App Service? But I am not sure exactly what.
I have met this issue before, and the problem might be your deployment method. I'm not sure which one you use, but the classic deployment center is being deprecated, so try using the new deployment center.
I checked your workflow against the one that worked on my side, and there is nothing different. So I will post the steps that worked for me for reference:
Check your project locally to make sure it runs successfully.
Create a new web app (this is to avoid any damage to your existing web app) and navigate to the Deployment Center page.
Go to your GitHub repository and open the GitHub Actions page to see the log.
Test your web app and check the file structure on the Kudu site: https://{yourappname}.scm.azurewebsites.net/wwwroot/
You can test by clicking the browse button like I did.
If you want to run commands, go to this site: https://{yourappname}.scm.azurewebsites.net/DebugConsole
By the way, I'm posting this link in case you need to deploy using DevOps.
A possible reason for this issue is that you don't have Django installed.
On the Microsoft-hosted agent ubuntu-latest, Django is not pre-installed; that is, you need to install it manually:
pip install Django==3.1.7
See this document for detailed information about installing Django.
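In pipeline terms, that would mean adding the install to the existing script step, along these lines (a sketch based on the pipeline in the question; the Django pin is just the example version above):

- script: |
    python -m venv antenv
    source antenv/bin/activate
    python -m pip install --upgrade pip
    # Install Django explicitly in case requirements.txt is not being applied.
    pip install Django==3.1.7
    pip install -r requirements.txt
  workingDirectory: $(projectRoot)
  displayName: "Install requirements"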
Apache Beam Python SDK upgrade to 2.11.0 issue.
I am upgrading the SDK from 2.4.0 to 2.11.0 using requirements.txt, which has these dependencies:
apache_beam==2.11.0
google-cloud-dataflow==2.4.0
httplib2==0.11.3
google-cloud==0.27.0
google-cloud-storage==1.3.0
We use this requirements.txt file to manage the dependencies in the Beam pipeline. There are two VM instances on Google Compute Engine; one is the master, the other is the worker. These instances install all the packages listed in the requirements.txt file.
The jobs are run through the DataflowRunner, launching the code manually with a command like:
python code.py --project --setupFilePath --requirementFilePath --workerMachineType n1-standard-8 --runner DataflowRunner
The job does not upgrade the SDK to 2.11.0; instead it fails. Error message in the Stackdriver logs:
2019-03-26 19:02:02.000 IST
Failed to install packages: failed to install requirements: exit status 1
{
  insertId: "27857323862365974846:1225647:0:438995"
  jsonPayload: {
    line: "boot.go:144"
    message: "Failed to install packages: failed to install requirements: exit status 1"
  }
  labels: {
    compute.googleapis.com/resource_id: "278567544395974846"
    compute.googleapis.com/resource_name: "icf-20190334132038-03260625-b9fa-harness-gtml"
    compute.googleapis.com/resource_type: "instance"
    dataflow.googleapis.com/job_id: "2019-03-26_06_25_16-6068768320191854196"
    dataflow.googleapis.com/job_name: "icf-20190326132038"
    dataflow.googleapis.com/region: "global"
  }
  logName: "projects/project-id/logs/dataflow.googleapis.com%2Fworker-startup"
  receiveTimestamp: "2019-03-26T13:32:07.627920858Z"
  resource: {
    labels: {
      job_id: "2019-03-26_06_25_16-6068768320191854196"
      job_name: "icf-20190326132038"
      project_id: "project-id"
      region: "global"
      step_id: ""
    }
    type: "dataflow_step"
  }
  severity: "CRITICAL"
  timestamp: "2019-03-26T13:32:02Z"
}
Note: when running pip install apache-beam==2.11.0 manually on both worker and master, the code runs.
I am not sure without seeing the rest of the logs, but most likely the issue here is an incompatible dependency. Are you able to run the pipeline locally and see if you have any dependency issues?
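A quick local check (a sketch, assuming virtualenv is available; it just installs the question's requirements.txt pins into a clean environment so pip can surface any conflicts) might look like:

# Use a clean virtualenv so local site-packages don't mask conflicts.
python -m virtualenv beam-deps-test
source beam-deps-test/bin/activate

# Install the exact pins from requirements.txt; pip errors out here
# if the pinned versions are incompatible with each other.
pip install apache_beam==2.11.0 google-cloud-dataflow==2.4.0 \
    httplib2==0.11.3 google-cloud==0.27.0 google-cloud-storage==1.3.0

# Report any remaining broken requirements.
pip check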
I have a number of environments running in AWS Elastic Beanstalk. I deploy directly from git using git aws.push.
I use composer.json to install the required PHP SDKs. I've not changed this file for a long time, but it's suddenly started failing in all environments.
Output from the AWS logs is:
+ echo 'Found composer.json file. Attempting to install vendors.'
Found composer.json file. Attempting to install vendors.
+ composer.phar install --no-ansi --no-interaction
Loading composer repositories with package information
Installing dependencies
[RuntimeException]
Could not load package aws/aws-sdk-php in http://packagist.org: [UnexpectedValueException] Could not parse version constraint ^5.3: Invalid version string "^5.3"
[UnexpectedValueException]
Could not parse version constraint ^5.3: Invalid version string "^5.3"
install [--prefer-source] [--prefer-dist] [--dry-run] [--dev] [--no-dev] [--no-custom-installers] [--no-scripts] [--no-progress] [-v|vv|vvv|--verbose] [-o|--optimize-autoloader]
2015-05-28 09:57:18,414 [ERROR] (15056 MainThread) [directoryHooksExecutor.py-33] [root directoryHooksExecutor error] Script /opt/elasticbeanstalk/hooks/appdeploy/pre/10_composer_install.sh failed with returncode 1
my composer.json is:
{
    "require": {
        "aws/aws-sdk-php": "2.7.*",
        "monolog/monolog": "1.0.*",
        "facebook/php-sdk-v4": "4.0.*",
        "ext-curl": "*",
        "paypal/sdk-core-php": "v1.4.2",
        "paypal/permissions-sdk-php": "v2.5.106",
        "paypal/adaptivepayments-sdk-php": "2.*"
    }
}
I notice it does want aws-sdk-php, but the version is not 5.3 (which is mentioned in the logs).
5.3 makes me think of the PHP version; checking php -v I get:
php -v
PHP 5.5.12 (cli) (built: May 20 2014 22:27:36)
Copyright (c) 1997-2014 The PHP Group
Zend Engine v2.5.0, Copyright (c) 1998-2014 Zend Technologies
with Zend OPcache v7.0.4-dev, Copyright (c) 1999-2014, by Zend Technologies
I've tried re-installing older versions that have previously installed fine, and they also fail with the same error, so this has to be due to the environment. Does anyone know if there have been changes recently?
Create a folder in the root of your project called .ebextensions. Then create a new file in there called 01-composer-install.config with the following content:
commands:
  01_update_composer:
    command: export COMPOSER_HOME=/root && /usr/bin/composer.phar self-update

option_settings:
  - namespace: aws:elasticbeanstalk:application:environment
    option_name: COMPOSER_HOME
    value: /root
I just had to update composer using the instructions here:
https://getcomposer.org/download/
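For a one-off fix on a running instance, the equivalent manual commands would be something like this (a sketch; the phar path matches the Beanstalk hook output above). Old Composer builds cannot parse the caret (^) version constraints that newer packages on packagist.org use, which is exactly what the Invalid version string "^5.3" error is complaining about:

# Composer needs a home directory when run as root on the Beanstalk AMI.
export COMPOSER_HOME=/root

# Self-update Composer so it understands caret (^) constraints.
/usr/bin/composer.phar self-update
/usr/bin/composer.phar --version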