Ansible Pull multiple repos - if-statement

I want to pull multiple repositories via an Ansible playbook, but only when a condition matches:
tasks:
  - name: pull from git abc/123
    git:
      repo: git@gitlab.com:xyz.git
      dest: /var/www/abc/123
      update: yes
      version: "{{ sprint_name }}"
  - name: pull from git abc/234
    git:
      repo: git@gitlab.com:xyz.git
      dest: /var/www/234
      update: yes
      version: "{{ sprint_name }}"
Now I want to pass "123" or "234" as a variable, so that if the user wants to pull only "123" or only "234", they are able to do so.

If you want the user to make choices at playbook runtime, by typing in information that alters the playbook's execution, you can use the vars_prompt section.
You will get the response in a variable, and with a when condition on your tasks you can control which tasks run. See the Ansible documentation on interactive prompts (vars_prompt), and the sketch below.
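A minimal sketch of that idea (the variable name repo_id and the prompt wording are assumptions, not taken from the question):

- hosts: all
  vars_prompt:
    - name: repo_id                  # assumed variable name
      prompt: "Which directory do you want to update (123/234)?"
      private: no
  tasks:
    - name: pull from git abc/123
      git:
        repo: git@gitlab.com:xyz.git
        dest: /var/www/abc/123
        update: yes
        version: "{{ sprint_name }}"
      when: repo_id == "123"

    - name: pull from git abc/234
      git:
        repo: git@gitlab.com:xyz.git
        dest: /var/www/234
        update: yes
        version: "{{ sprint_name }}"
      when: repo_id == "234"

Here sprint_name is assumed to be defined elsewhere (for example as an extra var). The play prompts once at startup, and the when conditions skip whichever task does not match the answer.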

How can I build a GitHub action using a public Docker registry container?

I am trying to create a simple Docker action, but instead of using the local Dockerfile in the repository I want to use a public image on Docker Hub. I have created two repositories:
docker-creator: which has the docker action
docker-user: which is a job that uses docker-creator
This is my action.yml in docker-creator:
name: 'Hello World'
description: 'Greet someone and record the time'
inputs:
  who-to-greet:  # id of input
    description: 'Who to greet'
    required: true
    default: 'World'
outputs:
  time:  # id of output
    description: 'The time we greeted you'
runs:
  using: 'docker'
  image: 'docker://alpine:3.10'
  args:
    - ${{ inputs.who-to-greet }}
In the image param you will typically see 'Dockerfile', but in my example I have replaced it with a public Docker registry container.
When I use docker-creator from docker-user I get an error (screenshot: "docker-creator error").
This is the main.yml workflow in docker-user:
hello_world_job:
  runs-on: ubuntu-latest
  name: A job to say hello
  steps:
    - name: Hello world action step
      id: hello
      uses: bryanaexp/docker-creator@main
      with:
        who-to-greet: 'Cat'
    # Use the output from the `hello` step
    - name: Get the output time
      run: echo "The time was ${{ steps.hello.outputs.time }}"
I have tried looking for help online, but a lot of it only covers using a Dockerfile. I have also referred to this documentation, https://docs.github.com/en/actions/creating-actions/metadata-syntax-for-github-actions#example-using-public-docker-registry-container, but to no avail.
LINKS:
Docker Action : https://github.com/bryanaexp/docker-creator/blob/main/action.yml
Github Action Using Docker Action: https://github.com/bryanaexp/docker-user
These are the steps I have taken to resolve my problem:
1 - I have read through a lot of the GitHub Actions documentation:
https://docs.github.com/en/actions/creating-actions/metadata-syntax-for-github-actions#example-using-public-docker-registry-container
2 - Done various tests on my code using different parameters
3 - Searched Stack Overflow, but only found a similar example that uses a private Docker registry.

Azure Pipeline YAML templates - Is there any way to check and confirm if a pipeline covers all the required jobs while using a template?

We are using YAML pipelines in Azure DevOps along with templates. The requirement is to identify whether all the pipelines that use the template are running a required set of steps. Is there any way to confirm this, other than manual monitoring?
It would be useful if conditional checks could be added so that we can verify whether a specific task is present or not.
To explain with an example, let's say that a template has 4 tasks for running 4 different types of tests. Multiple pipelines are created using this template. They can opt in to run these tests by turning them on (setting a Yes/No value in an input parameter). We need to check and verify whether all the pipelines are running all 4 of these tests.
There is no easy way to do this, as there is no option to get details about the templates a pipeline uses via the REST API, or any other way. What you can do is template the check itself. So first you need to create a template like:
parameters:
  - name: repositoryName
    type: string
    default: 'self'
  - name: pipelinePath
    type: string
    default: ''  # if your pipeline file always has the same name you can put it here and not set it in the calling pipeline

jobs:
  - job: '${{ parameters.repositoryName }}'  # please ensure that your repository name is a valid job name
    dependsOn: []
    steps:
      - checkout: ${{ parameters.repositoryName }}
      - pwsh: |
          # Here you should use the yq tool to extract the steps which use your template,
          # then check the output against your conditions,
          # and then send an alert to Slack or wherever you want.
Then you can call it from the pipeline like:
parameters:
  - name: repositories
    type: object
    default:
      - RepoA
      - RepoB
      # list all your repos here

resources:
  repositories:
    - repository: RepoA
      type: git
      name: RepoA
    - repository: RepoB
      type: git
      name: RepoB
    # list the same repos here to be sure you have permission to use them in the pipeline

jobs:
  # pipelinePath is assumed to always be the same and set as the default, so an each expression can be used.
  # If not, you will have to call the template one repository at a time.
  - ${{ each repository in parameters.repositories }}:
      - template: template.yml
        parameters:
          repositoryName: ${{ repository }}
This is a potential solution; I can't provide the full code, but I think the idea is clear. I suggested the yq tool, but you can use whatever helps you verify that the template is used as expected. You can also set this up with a scheduled (cron) trigger so the check runs regularly.
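As a rough sketch of what the pwsh step could do (assuming the mikefarah yq tool is available on the agent, the checked-out pipeline file is named azure-pipelines.yml, and the required template is tests-template.yml; all three names are assumptions for illustration):

      - pwsh: |
          # List job-level template references in the checked-out pipeline file
          $templates = (yq '.jobs[] | select(has("template")) | .template' azure-pipelines.yml) -join "`n"
          if ($templates -notmatch 'tests-template\.yml') {
            # Surface a warning in the run; this could also post to Slack instead
            Write-Host "##vso[task.logissue type=warning]Required test template is not referenced"
          }
        displayName: Check that the required template is referenced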

How to set the environment variable in cloudbuild.yaml file?

I am trying to set GOOGLE_APPLICATION_CREDENTIALS. Is this the correct way to set an environment variable? Below is my YAML file:
steps:
- name: 'node:10.10.0'
  id: installing_npm
  args: ['npm', 'install']
  dir: 'API/system_performance'
- name: 'node:10.10.0'
  #entrypoint: bash
  args: ['bash', 'set GOOGLE_APPLICATION_CREDENTIALS=test/emc-ema-cp-d-267406-a2af305d16e2.json']
  id: run_test_coverage
  args: ['npm', 'run', 'coverage']
  dir: 'API/system_performance'
Please help me solve this.
You can use the env step parameter.
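For example, a minimal sketch of the second build step from the question rewritten to use env (the key file path is copied from the question, the rest is illustrative):

steps:
- name: 'node:10.10.0'
  id: run_test_coverage
  args: ['npm', 'run', 'coverage']
  dir: 'API/system_performance'
  env:
  - 'GOOGLE_APPLICATION_CREDENTIALS=test/emc-ema-cp-d-267406-a2af305d16e2.json'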
However, when you execute Cloud Build, the platform uses its own service account (in the future, it will be possible to specify the service account that you want to use).
Thus, if you grant the Cloud Build service account the correct role, you don't need a key file at all (and committing a key file to your Git repository is not really a good practice!).

Cannot run Azure CLI task on yaml build

I'm starting to lose my sanity over a YAML build. This is the very first YAML build I've ever tried to configure, so it's likely I'm making some basic mistake.
This is my yaml build definition:
name: ops-tools-delete-failed-containers-$(Date:yyyyMMdd)$(Rev:.rrrr)

trigger:
  branches:
    include:
      - master
      - features/120414-delete-failed-container-instances

schedules:
  - cron: '20,50 * * * *'
    displayName: At minutes 20 and 50
    branches:
      include:
        - features/120414-delete-failed-container-instances
    always: 'true'

pool:
  name: Standard-Windows

variables:
  - name: ResourceGroup
    value: myResourceGroup

stages:
  - stage: Delete
    displayName: Delete containers
    jobs:
      - job: Job1
        steps:
          - task: AzureCLI@2
            inputs:
              azureSubscription: 'CPA (Infrastructure) (xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx)'
              scriptType: 'pscore'
              scriptLocation: 'scriptPath'
              scriptPath: 'General/Automation/ACI/Delete-FailedContainerInstances.ps1'
              arguments: '-ResourceGroup $(ResourceGroup)'
So in short, I want to run a script using an Azure CLI task. When I queue a new build, it just stays queued and never starts.
I've tried running the same task with an inline script without success. The same thing happens if I try to run a Powershell task instead of an Azure CLI task.
What am I missing here?
TL;DR: the issue was caused by (a lack of) permissions.
More details
After enabling an additional setting I could see more details about the problem: a warning that the pipeline needed permission to use a resource.
Clicking on View shows the Azure subscription used in the Azure CLI task; after clicking on Permit, everything works as expected.
Your YAML file should be correct. I have tested your YAML on my side, and it works fine.
The only thing I modified is the agent pool, which I changed to my private agent:
pool:
  name: MyPrivateAgent
Besides, judging from the state shown in your screenshot, it seems the private agent in the agent queue you specified for the build definition is not running.
Make sure the agent is running, and the build will then start.
As a test, you could use a hosted agent instead of your private agent, like:
pool:
  vmImage: 'ubuntu-latest'
Hope this helps.

Ansible Keepass integration via python script

I am very new to Ansible and would like to test a few things.
I have a couple of Amazon EC2 instances and would like to install different software components on them. I don't want to have the (plaintext) credentials of the technical users inside Ansible scripts or config files. I know that it is possible to encrypt those files, but I want to try KeePass as a central password management tool. So my installation scripts should read the credentials from a .kdbx (KeePass 2) database file before starting the actual installation.
So far I have written a basic Python script for reading the .kdbx file. The script outputs a JSON object via:
print json.dumps(inventory, sort_keys=False)
The output looks like the following:
{"cdc":
{"cdc_test_server":
{"cdc_test_user":
{"username": "cdc_test_user",
"password": "password"}
}
}
}
Now I want the Python script to be executed by Ansible and the key/value pairs of its output to be included/registered as Ansible variables. So far my playbook looks as follows:
- hosts: 127.0.0.1
  connection: local
  tasks:
    - name: "Test Playbook Functionality"
      command: python /usr/local/test.py
      register: pass
    - debug: var=pass.stdout
    - name: "Include json user output"
      set_fact: passwords="{{ pass.stdout | from_json }}"
    - debug: " {{ passwords.cdc.cdc_test_server.cdc_test_user.password }} "
The first debug produces the correct JSON output, but I am not able to register the variables in Ansible so that I can use them via Jinja2 notation. set_fact doesn't throw an exception, but the last debug just returns a "Hello world" message. So my question is: how do I properly include the JSON key/value pairs as Ansible variables in a task?
See Ansible KeePass Lookup Plugin
ansible_user: "{{ lookup('keepass', 'path/to/entry', 'username') }}"
ansible_become_pass: "{{ lookup('keepass', 'path/to/entry', 'password') }}"
You may want to use facts.d and place your Python script there, so its output becomes available as a local fact; see the sketch below.
Or write a simple action plugin that returns a JSON object, to eliminate the need for the stdout -> from_json conversion.
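A minimal sketch of the facts.d idea, assuming the script is installed, executable, on the managed host as /etc/ansible/facts.d/keepass.fact (the file name is an assumption) and prints the same JSON as above; local facts from that directory are gathered automatically and show up under ansible_local:

- hosts: 127.0.0.1
  connection: local
  gather_facts: yes          # needed so /etc/ansible/facts.d/*.fact gets executed
  tasks:
    - debug:
        msg: "{{ ansible_local.keepass.cdc.cdc_test_server.cdc_test_user.password }}"

No register/set_fact step is needed in this variant; the fact name (keepass) comes from the script's file name.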
Late to the party, but it seems your use case is primarily covered by keepass-inventory. And it doesn't require any playbook "magic". Disclaimer: I contribute to this non-profit.
export KDB_PATH=example.kdbx
export KDB_PASS=example
ansible all --list-hosts -i keepass-inventory.py