Hi all, I have the following scenario: I need to run a shell command and loop it over the device list, using the commands listed in commands. I've tried a dict and a list but have had no joy. Does anyone have any ideas how this can be done? I also need to be able to add entries to and remove them from both lists.
- hosts: "{{ devices }}"
  gather_facts: no
  vars:
    devices:
      - device1
      - device2
    commands:
      - command1
      - command2
  tasks:
    - shell: "DO a task in here on devices in device list and commands in command list"
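Not part of the question, but a minimal sketch of one way this could be wired up, assuming the device list is an inventory group (or is passed in with --extra-vars) and that every command should run on every device; the command strings are placeholders:

- hosts: devices
  gather_facts: no
  vars:
    commands:
      - command1
      - command2
  tasks:
    - name: Run each command from the list on every targeted host
      shell: "{{ item }}"
      loop: "{{ commands }}"

Adding or removing devices then means editing the inventory group, and adding or removing commands means editing the commands list (or overriding it with --extra-vars).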
I have been at this for days now and it is driving me crazy. Based on other posts, I have set up the following cloudbuild.yaml:
steps:
  - name: gcr.io/cloud-builders/docker
    args:
      - build
      - -t
      - gcr.io/${INSTANCE_NAME}
      - .
  - name: gcr.io/cloud-builders/docker
    args:
      - push
      - gcr.io/${INSTANCE_NAME}
  - name: 'gcr.io/${INSTANCE_NAME}'
    entrypoint: sh
    env:
      - DATABASE_URL=postgresql://USER:PASSWORD@localhost/DATABASE?host=/cloudsql/CONNECTION_NAME
    args:
      - -c
      - |
        wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
        chmod +x cloud_sql_proxy
        ./cloud_sql_proxy -instances=CONNECTION_NAME=tcp:5432 & sleep 3
        npx prisma migrate deploy
  - name: gcr.io/google.com/cloudsdktool/cloud-sdk
    entrypoint: gcloud
    args:
      - run
      - deploy
      - backend
      - --image
      - gcr.io/${INSTANCE_NAME}
      - --region
      - europe-west1
images:
  - gcr.io/${INSTANCE_NAME}
When running this, I am greeted by:
Step #2: 2023/02/05 13:00:49 Listening on 127.0.0.1:5432 for CONNECTION_NAME
Step #2: 2023/02/05 13:00:49 Ready for new connections
Step #2: 2023/02/05 13:00:49 Generated RSA key in 118.117245ms
Step #2: npm WARN exec The following package was not found and will be installed: prisma@4.9.0
Step #2: Prisma schema loaded from prisma/schema.prisma
Step #2: Datasource "db": PostgreSQL database "develop", schema "public" at "localhost"
Step #2:
Step #2: Error: P1001: Can't reach database server at `/cloudsql/CONNECTION_NAME`:`5432`
Step #2:
Step #2: Please make sure your database server is running at `/cloudsql/CONNECTION_NAME`:`5432`.
So even with the database URL hardcoded and the Cloud SQL proxy working, I am STILL getting this error. What am I missing?
Check the container name in the .env file and change it to postgres, as it replaces the name in the connection string, as discussed here.
Or try the following format if you don't want to hardcode the IP address:
DB_USER=dbuser
DB_PASS=dbpass
DB_HOST=localhost
DB_PORT=5432
CLOUD_SQL_CONNECTION_NAME=/cloudsql/gcp-project-id:europe-west3:db-instance-name
DATABASE_URL=postgres://${DB_USER}:${DB_PASS}@${DB_HOST}:${DB_PORT}/${DB_BASE}?host=${CLOUD_SQL_CONNECTION_NAME}
If you have a public IP, try connecting via the Unix socket.
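For reference (my addition, not from the original posts), here are the two URL shapes side by side; the posted error suggests Prisma is following the host=/cloudsql/... socket path even though the proxy in the question only listens on TCP. All values are placeholders:

# TCP: matches a proxy started with -instances=CONNECTION_NAME=tcp:5432
DATABASE_URL=postgresql://USER:PASSWORD@127.0.0.1:5432/DATABASE

# Unix socket: matches a proxy started with -dir=/cloudsql -instances=CONNECTION_NAME
DATABASE_URL=postgresql://USER:PASSWORD@localhost/DATABASE?host=/cloudsql/CONNECTION_NAME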
Just curious: why isn't there an officially supported Helm cloud builder? It seems like a very common requirement, yet I'm not seeing one in the list here:
https://github.com/GoogleCloudPlatform/cloud-builders
I was previously using alpine/helm in my cloudbuild.yaml for my helm deployment as follows:
steps:
  # Build app image
  - name: gcr.io/cloud-builders/docker
    args:
      - build
      - -t
      - $_IMAGE_REPO/$_CONTAINER_NAME:$COMMIT_SHA
      - ./cloudbuild/$_CONTAINER_NAME/
  # Push my-app image to Google Cloud Registry
  - name: gcr.io/cloud-builders/docker
    args:
      - push
      - $_IMAGE_REPO/$_CONTAINER_NAME:$COMMIT_SHA
  # Configure a kubectl workspace for this project
  - name: gcr.io/cloud-builders/kubectl
    args:
      - cluster-info
    env:
      - CLOUDSDK_COMPUTE_REGION=$_CUSTOM_REGION
      - CLOUDSDK_CONTAINER_CLUSTER=$_CUSTOM_CLUSTER
      - KUBECONFIG=/workspace/.kube/config
  # Deploy with Helm
  - name: alpine/helm
    args:
      - upgrade
      - -i
      - $_CONTAINER_NAME
      - ./cloudbuild/$_CONTAINER_NAME/k8s
      - --set
      - image.repository=$_IMAGE_REPO/$_CONTAINER_NAME,image.tag=$COMMIT_SHA
      - -f
      - ./cloudbuild/$_CONTAINER_NAME/k8s/values.yaml
    env:
      - KUBECONFIG=/workspace/.kube/config
      - TILLERLESS=false
      - TILLER_NAMESPACE=kube-system
      - USE_GKE_GCLOUD_AUTH_PLUGIN=True
timeout: 1200s
substitutions:
  # substitutionOption: ALLOW_LOOSE
  # dynamicSubstitutions: true
  _CUSTOM_REGION: us-east1
  _CUSTOM_CLUSTER: demo-gke
  _IMAGE_REPO: us-east1-docker.pkg.dev/fakeproject/my-docker-repo
  _CONTAINER_NAME: app2
options:
  logging: CLOUD_LOGGING_ONLY
  # Here we provide the worker pool name that was created in the previous step
  workerPool: 'projects/fakeproject/locations/us-east1/workerPools/cloud-build-pool'
And this was working with no issues. Then it recently started failing with the following error, so I'm guessing a change was made somewhere:
Error: Kubernetes cluster unreachable: Get "https://10.10.2.2/version": getting credentials: exec: executable gke-gcloud-auth-plugin not found
I get this error regularly on VMs and can work around it by setting USE_GKE_GCLOUD_AUTH_PLUGIN=True, but that does not seem to fix the issue here if I add it to the env section. So I'm looking for recommendations on how to use Helm with Cloud Build. alpine/helm was just something I randomly tried and it was working for me up until now, but there are probably better solutions out there.
Thanks!
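Not part of the original post, but one direction worth sketching: run Helm from the cloud-sdk image, where the GKE auth plugin can be installed next to gcloud. This is a hedged sketch rather than a tested recipe; the apt package name and the Helm install script are assumptions that may need adjusting for the image version. The community Helm builder in GoogleCloudPlatform/cloud-builders-community is another option.

# Hedged sketch: replace the alpine/helm step with a cloud-sdk step that
# installs the GKE auth plugin and Helm before deploying. Package and script
# names are assumptions, not taken from the original build.
- name: gcr.io/google.com/cloudsdktool/cloud-sdk
  entrypoint: bash
  env:
    - USE_GKE_GCLOUD_AUTH_PLUGIN=True
  args:
    - -c
    - |
      apt-get update -qq && apt-get install -y -qq google-cloud-sdk-gke-gcloud-auth-plugin
      curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
      gcloud container clusters get-credentials $_CUSTOM_CLUSTER --region $_CUSTOM_REGION
      helm upgrade -i $_CONTAINER_NAME ./cloudbuild/$_CONTAINER_NAME/k8s \
        --set image.repository=$_IMAGE_REPO/$_CONTAINER_NAME,image.tag=$COMMIT_SHA \
        -f ./cloudbuild/$_CONTAINER_NAME/k8s/values.yaml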
How can I deploy a directory to an FTP or SSH server, with a trigger and cloudbuild.yaml?
So far I can already generate a listing of the files which I'd like to upload:
steps:
  - name: 'ubuntu'
    entrypoint: 'bash'
    args:
      - '-c'
      - |-
        find $_UPLOAD_DIRNAME -exec echo {} >> batch.txt \;
        cat ./batch.txt
    env:
      ...
I've come to the conclusion that I don't want the FTP anti-pattern,
and have therefore written an alternate SSH cloudbuild.yaml:
generate a new pair of RSA keys.
use the private key for SSH login.
recursively upload the directory with scp.
run remote commands with ssh.
It logs in as user root, so the remote /etc/ssh/sshd_config needs PermitRootLogin yes.
My variable substitutions meanwhile look like this:
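(The exact values didn't survive in the post; the following is a placeholder reconstruction, with the keys taken from the env section of the YAML below and every value invented for illustration.)

substitutions:
  _COMPUTE_ZONE: europe-west3-a
  _COMPUTE_INSTANCE: my-instance
  _UPLOAD_DIRNAME: my-app
  _REMOTE_PATH: /var/www/my-app
  _SSH_FLAG: '-o StrictHostKeyChecking=no'
  _SSH_COMMAND: 'ls -la /var/www/my-app'
  _SSH_KEY_EXPIRE_AFTER: 30m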
And this would be the cloudbuild.yaml, which generally demonstrates how to set up SSH keys:
steps:
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk:latest'
    entrypoint: 'bash'
    args:
      - '-c'
      - |-
        echo Deploying $_UPLOAD_DIRNAME @ $SHORT_SHA
        gcloud config set compute/zone $_COMPUTE_ZONE
        gcloud config set project $PROJECT_ID
        mkdir -p /builder/home/.ssh
        gcloud compute config-ssh
        gcloud compute scp --ssh-key-expire-after=$_SSH_KEY_EXPIRE_AFTER --scp-flag="${_SSH_FLAG}" --recurse ./$_UPLOAD_DIRNAME $_COMPUTE_INSTANCE:$_REMOTE_PATH
        gcloud compute ssh $_COMPUTE_INSTANCE --ssh-key-expire-after=$_SSH_KEY_EXPIRE_AFTER --ssh-flag="${_SSH_FLAG}" --command="${_SSH_COMMAND}"
    env:
      - '_COMPUTE_ZONE=$_COMPUTE_ZONE'
      - '_COMPUTE_INSTANCE=$_COMPUTE_INSTANCE'
      - '_UPLOAD_DIRNAME=$_UPLOAD_DIRNAME'
      - '_REMOTE_PATH=$_REMOTE_PATH'
      - '_SSH_FLAG=$_SSH_FLAG'
      - '_SSH_COMMAND=$_SSH_COMMAND'
      - '_SSH_KEY_EXPIRE_AFTER=$_SSH_KEY_EXPIRE_AFTER'
      - 'PROJECT_ID=$PROJECT_ID'
      - 'SHORT_SHA=$SHORT_SHA'
I've managed to deploy to FTP with ncftp:
first patch /etc/apt/sources.list.
then install ncftp with apt-get.
create the file ~/.ncftp with variable substitutions.
optional step: replace text in files with sed.
recursively upload the directory with ncftpput.
Here's my cloudbuild.yaml (it is working, but the SSH approach above might offer a better solution):
steps:
  - name: 'ubuntu'
    entrypoint: 'bash'
    args:
      - '-c'
      - |-
        echo Deploying ${_UPLOAD_DIRNAME} @ ${SHORT_SHA}
        echo to ftp://${_REMOTE_ADDRESS}${_REMOTE_PATH}
        echo "deb http://archive.ubuntu.com/ubuntu/ focal universe" > /etc/apt/sources.list
        apt-get update -y && apt-get install -y ncftp
        cat << EOF > ~/.ncftp
        host $_REMOTE_ADDRESS
        user $_FTP_USERNAME
        pass $_FTP_PASSWORD
        EOF
        # sed -i "s/##_GIT_COMMIT_##/${SHORT_SHA}/g" ./${_UPLOAD_DIRNAME}/plugin.php
        ncftpput -f ~/.ncftp -R $_REMOTE_PATH $_UPLOAD_DIRNAME
    env:
      - '_UPLOAD_DIRNAME=$_UPLOAD_DIRNAME'
      - '_REMOTE_ADDRESS=$_REMOTE_ADDRESS'
      - '_REMOTE_PATH=$_REMOTE_PATH'
      - '_FTP_USERNAME=$_FTP_USERNAME'
      - '_FTP_PASSWORD=$_FTP_PASSWORD'
      - 'SHORT_SHA=$SHORT_SHA'
Where _REMOTE_PATH is e.g. /wp-content/plugins (the variable requires at least one slash) and _UPLOAD_DIRNAME is the name of the directory within the local Git repository, with no slashes.
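For completeness, a hedged sketch (my addition, every value a placeholder) of how those substitutions could be declared in the same cloudbuild.yaml:

substitutions:
  _UPLOAD_DIRNAME: my-plugin
  _REMOTE_ADDRESS: ftp.example.com
  _REMOTE_PATH: /wp-content/plugins
  _FTP_USERNAME: ftpuser
  _FTP_PASSWORD: changeme

In a real setup the FTP password is better referenced from Secret Manager via availableSecrets than stored as a plain substitution.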
The template gets copied normally into /etc/nginx/sites-enabled.
On running this command: ansible localhost -b -m copy -a "src=/abc/efg/ngs/templates/sites-enabled.j2 dest=/etc/nginx/sites-enabled"
the file gets copied.
In /etc/nginx/sites-enabled, ls gives the output default and sites-enabled.j2.
How do I copy the template provided in /ngs to /etc/nginx/sites-enabled/default, and how do I start nginx using ad-hoc commands?
What I understood from your question is that:
You want to copy multiple template files from
src = "/abc/efg/ngs/" to dest = "/etc/nginx/sites-enabled/default".
You want to restart Nginx.
To achieve this using ad-hoc commands:
COPY FILES: ansible localhost -b -m copy -a "src=/abc/efg/ngs/templates dest=/etc/nginx/sites-enabled/default/"
START NGINX USING COMMAND MODULE: ansible localhost -m command -a "systemctl start nginx"
START NGINX USING SHELL MODULE: ansible localhost -m shell -a "systemctl start nginx"
Ref to ad-hoc commands: https://docs.ansible.com/ansible/latest/user_guide/intro_adhoc.html
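One detail worth noting (not part of the original answer): since the source file is a Jinja2 template (sites-enabled.j2), the template module renders it before copying, whereas copy transfers it verbatim. A hedged ad-hoc sketch reusing the question's paths:

ansible localhost -b -m template -a "src=/abc/efg/ngs/templates/sites-enabled.j2 dest=/etc/nginx/sites-enabled/default"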
To achieve this using a playbook:
- name: Copying files from source to destination
  copy:
    src: /abc/efg/ngs/templates
    dest: /etc/nginx/sites-enabled/default/
    owner: foo
    group: foo
    mode: 0644

- name: Starting nginx
  command: systemctl start nginx
Ref: https://docs.ansible.com/ansible/2.4/copy_module.html
Ref: https://docs.ansible.com/ansible/latest/collections/ansible/builtin/command_module.html
But rather, I would suggest learning about handlers, as they are very helpful for these kinds of tasks where you want to restart/reload a service only when a change actually happens (see the sketch after the link below).
Ref: https://docs.ansible.com/ansible/latest/user_guide/playbooks_handlers.html
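A minimal handler-based sketch (my addition; the paths are reused from the question, while the use of the template module and the nginx service name are assumptions):

- hosts: localhost
  become: yes
  tasks:
    - name: Render the nginx site config
      template:
        src: /abc/efg/ngs/templates/sites-enabled.j2
        dest: /etc/nginx/sites-enabled/default
      notify: Reload nginx

  handlers:
    - name: Reload nginx
      service:
        name: nginx
        state: reloaded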
If you asked something else, let me know.
I'm trying to provision my infrastructure on AWS using Ansible playbooks. I have the instance, and am able to provision docker-engine, docker-py, etc. I swear this worked correctly yesterday, and I haven't changed the code since.
The relevant portion of my playbook is:
- name: Ensure AWS CLI is available
  pip:
    name: awscli
    state: present
  when: aws_deploy

- block:
    - name: Add .boto file with AWS credentials.
      copy:
        content: "{{ boto_file }}"
        dest: ~/.boto
      when: aws_deploy
    - name: Log in to docker registry.
      shell: "$(aws ecr get-login --region us-east-1)"
      when: aws_deploy
    - name: Remove .boto file with AWS credentials.
      file:
        path: ~/.boto
        state: absent
      when: aws_deploy

- name: Create docker network
  docker_network:
    name: my-net

- name: Start Container
  docker_container:
    name: example
    image: "{{ docker_registry }}/example"
    pull: true
    restart: true
    network_mode: host
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone
My {{ docker_registry }} is set to my-acct-id.dkr.ecr.us-east-1.amazonaws.com and the result I'm getting is:
"msg": "Error pulling my-acct-id.dkr.ecr.us-east-1.amazonaws.com/example - code: None message: Get http://: http: no Host in request URL"
However, as mentioned, this worked correctly last night. Since then I've made some VPC/subnet changes, but I'm able to ssh to the instance, and run docker pull my-acct-id.dkr.ecr.us-east-1.amazonaws.com/example with no issues.
Googling hasn't led me very far, as I can't seem to find other folks with the same error. I'm wondering what changed and how I can fix it. Thanks!
EDIT: Versions:
ansible - 2.2.0.0
docker - 1.12.3 6b644ec
docker-py - 1.10.6
I had the same problem. Downgrading the docker-compose pip package on that host machine from 1.9.0 to 1.8.1 solved the problem.
- name: Install docker-compose
  pip: name=docker-compose version=1.8.1
Per this thread: https://github.com/ansible/ansible-modules-core/issues/5775, the real culprit is requests. This fixes it:
- name: fix requests
  pip: name=requests version=2.12.1 state=forcereinstall
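If both workarounds are needed on the same host, the pins can be combined into a single task; a sketch only, with the versions taken from the two answers above:

- name: Pin docker-compose and requests to known-good versions
  pip:
    name:
      - docker-compose==1.8.1
      - requests==2.12.1
    state: forcereinstall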