I'm looking for the proper way to translate a bash command (originally inside a Dockerfile) into an Ansible task/role that downloads the latest aws-iam-authenticator binary and installs it into /usr/local/bin on Ubuntu (x64).
Currently I have:
curl -s https://api.github.com/repos/kubernetes-sigs/aws-iam-authenticator/releases/latest | grep "browser_download.url.*linux_amd64" | cut -d : -f 2,3 | tr -d '"' | wget -O /usr/local/bin/aws-iam-authenticator -qi - && chmod 555 /usr/local/bin/aws-iam-authenticator
Basically, you need to write a playbook and split that command into separate tasks.
An example example.yml file:
- hosts: localhost
  tasks:
    - shell: |
        curl -s https://api.github.com/repos/kubernetes-sigs/aws-iam-authenticator/releases/latest
      register: json
    - set_fact:
        url: "{{ (json.stdout | from_json).assets[2].browser_download_url }}"
    - get_url:
        url: "{{ url }}"
        dest: /usr/local/bin/aws-iam-authenticator-ansible
        mode: 0555
You can execute it with:
ansible-playbook --become example.yml
I hope this is what you're looking for ;-)
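One caveat: assets[2] hard-codes the position of the linux_amd64 asset in the release's asset list, which can change between releases. A sketch of a more robust selection (same parsed JSON, filtering by asset name instead of position):

        url: "{{ (json.stdout | from_json).assets | selectattr('name', 'search', 'linux_amd64') | map(attribute='browser_download_url') | first }}"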
After finding other posts that gave strong hints, information, and unresolved issues (Ansible - Download latest release binary from Github repo and https://github.com/ansible/ansible/issues/27299#issuecomment-331068246), I was able to come up with the following Ansible tasks, which work for me.
- name: Get latest url for linux-amd64 release for aws-iam-authenticator
  uri:
    url: https://api.github.com/repos/kubernetes-sigs/aws-iam-authenticator/releases/latest
    return_content: true
    body_format: json
  register: json_response

- name: Download and install aws-iam-authenticator
  get_url:
    url: "{{ json_response.json | to_json | from_json | json_query(\"assets[?ends_with(name,'linux_amd64')].browser_download_url | [0]\") }}"
    mode: 0555
    dest: /usr/local/bin/aws-iam-authenticator
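One dependency worth mentioning: the json_query filter relies on the jmespath Python library on the machine running Ansible. If the filter errors out about a missing jmespath module, install it first:

pip install jmespath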
Note
If you're running the AWS CLI version 1.16.156 or later, then you don't need to install the authenticator. Instead, you can use the aws eks get-token command. For more information, see Create kubeconfig manually.
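For illustration, a minimal sketch of that alternative (the cluster name my-cluster and the region are placeholders, not taken from the question):

# Fetch a token for kubectl directly
aws eks get-token --cluster-name my-cluster
# Or write a kubeconfig whose user entry invokes get-token automatically
aws eks update-kubeconfig --name my-cluster --region us-east-1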
Related
I have built a Django application and dockerized it with Nginx, and I also created a GitHub workflow to build the Docker image and push it to ghcr.io.
Now I want to deploy the Docker image (from ghcr.io) to an Azure virtual machine (Ubuntu), but I couldn't find out how to connect the Azure VM to the GitHub workflow and execute some commands on it.
name: CI and CD

on: [push]

env:
  DOMAIN_NAME: ${{ secrets.DOMAIN_NAME }}

jobs:
  build:
    name: Build Docker Images
    runs-on: ubuntu-latest
    steps:
      - name: Checkout master
        uses: actions/checkout@v1
      - name: Add environment variables to .env
        run: |
          echo DJANGO_SECRET_KEY=${{ secrets.DJANGO_SECRET_KEY }} >> .env
          echo DJANGO_ALLOWED_HOSTS=${{ secrets.DJANGO_ALLOWED_HOSTS }} >> .env
          echo DATABASE=postgres >> .env
          echo DB_NAME=${{ secrets.DB_NAME }} >> .env
          echo DB_USER=${{ secrets.DB_USER }} >> .env
          echo DB_PASS='${{ secrets.DB_PASS }}' >> .env
          echo DB_HOST=${{ secrets.DB_HOST }} >> .env
          echo DB_PORT=${{ secrets.DB_PORT }} >> .env
          echo VIRTUAL_HOST=$DOMAIN_NAME >> .env
          echo VIRTUAL_PORT=8000 >> .env
          echo LETSENCRYPT_HOST=$DOMAIN_NAME >> .env
          echo EMAIL_HOST_USER=${{ secrets.EMAIL_HOST_USER }} >> .env
          echo EMAIL_HOST_PASSWORD=${{ secrets.EMAIL_HOST_PASSWORD }} >> .env
          echo DEFAULT_EMAIL=${{ secrets.DEFAULT_EMAIL }} >> .env
          echo NGINX_PROXY_CONTAINER=nginx-proxy >> .env
      - name: Set environment variables
        run: |
          echo WEB_IMAGE=ghcr.io/$(echo $GITHUB_REPOSITORY | tr '[:upper:]' '[:lower:]')/web >> $GITHUB_ENV
          echo NGINX_IMAGE=ghcr.io/$(echo $GITHUB_REPOSITORY | tr '[:upper:]' '[:lower:]')/nginx >> $GITHUB_ENV
      - name: Login to GitHub Container Registry
        uses: docker/login-action@v1
        with:
          registry: ghcr.io
          username: ${{ secrets.NAMESPACE }}
          password: ${{ secrets.PERSONAL_ACCESS_TOKEN }}
      - name: Pull images
        run: |
          docker pull $WEB_IMAGE || true
          docker pull $NGINX_IMAGE || true
      - name: Build images
        run: docker-compose build
      - name: Push images
        run: |
          docker push $WEB_IMAGE
          docker push $NGINX_IMAGE
You can set up a GitHub Actions self-hosted runner on the Azure VM, so that GitHub Actions jobs run on the VM itself and can deploy the Django application.
First, install the self-hosted runner on the Azure VM by SSHing into the VM and running the commands below:
mkdir actions-runner && cd actions-runner
curl -o actions-runner-linux-x64-2.278.0.tar.gz -L https://github.com/actions/runner/releases/download/v2.278.0/actions-runner-linux-x64-2.278.0.tar.gz
# Extract the installer
tar xzf ./actions-runner-linux-x64-2.278.0.tar.gz
Next, configure the VM to communicate with your GitHub account by running:
./config.sh --url https://github.com/{{Yourorganization}} --token <YOURTOKENFROMGITHUB>
You will be prompted through the registration process of your GitHub Actions self-hosted runner.
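To keep the runner alive after the SSH session ends, it can also be installed as a service using the helper scripts shipped in the runner package:

sudo ./svc.sh install
sudo ./svc.sh start

(Alternatively, ./run.sh starts the runner interactively.)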
Then install the dependencies your Django application needs on the VM.
Now you can run your GitHub Actions workflow on the VM by pointing the job at the self-hosted runner, as sketched below.
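A minimal sketch of a deploy job targeting the self-hosted runner (the job layout and the docker-compose commands are assumptions, not taken from the question's repository):

deploy:
  name: Deploy to Azure VM
  needs: build
  runs-on: self-hosted  # executes on the VM where the runner was registered
  steps:
    - name: Checkout
      uses: actions/checkout@v1
    - name: Pull latest images and restart containers
      run: |
        docker-compose pull
        docker-compose up -d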
Reference: Using the GitHub self-hosted runner and Azure Virtual Machines to login with a System Assigned Managed Identity | Cloud With Chris
I want to deploy an AWS Lambda .NET Core project using a Bitbucket pipeline.
I have created bitbucket-pipelines.yml as below, but after the build runs I get this error:
MSBUILD : error MSB1003: Specify a project or solution file. The current working directory does not contain a project or solution file.
File contents:
image: microsoft/dotnet:sdk

pipelines:
  default:
    - step:
        caches:
          - dotnetcore
        script: # Modify the commands below to build your repository.
          - export PROJECT_NAME=TestAWS/AWSLambda1/AWSLambda1.sln
          - dotnet restore
          - dotnet build $PROJECT_NAME
          - pipe: atlassian/aws-lambda-deploy:0.2.1
            variables:
              AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
              AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
              AWS_DEFAULT_REGION: 'us-east-1'
              FUNCTION_NAME: 'my-lambda-function'
              COMMAND: 'update'
              ZIP_FILE: 'code.zip'
The project structure is like this (screenshot omitted):
The problem is here:
PROJECT_NAME=TestAWS/AWSLambda1/AWSLambda1.sln
This is the incorrect path. Bitbucket Pipelines will use a special path in the Docker image, something like /opt/atlassian/pipelines/agent/build/YOUR_PROJECT, to do a Git clone of your project.
You can see this when you click on the "Build Setup" step in the Pipelines web console:
Cloning into '/opt/atlassian/pipelines/agent/build'...
You can use a pre-defined environment variable to retrieve this path: $BITBUCKET_CLONE_DIR, as described here: https://support.atlassian.com/bitbucket-cloud/docs/variables-in-pipelines/
Consider something like this in your yml build script:
script:
  - echo $BITBUCKET_CLONE_DIR  # Debug: Print the $BITBUCKET_CLONE_DIR
  - pwd  # Debug: Print the current working directory
  - find "$(pwd -P)" -name AWSLambda1.sln  # Debug: Show the full file path of AWSLambda1.sln
  - export PROJECT_NAME="$BITBUCKET_CLONE_DIR/AWSLambda1.sln"
  - echo $PROJECT_NAME
  - if [ -f "$PROJECT_NAME" ]; then echo "File exists" ; fi
  # Try this if the file path is not as expected
  - export PROJECT_NAME="$BITBUCKET_CLONE_DIR/AWSLambda1/AWSLambda1.sln"
  - echo $PROJECT_NAME
  - if [ -f "$PROJECT_NAME" ]; then echo "File exists" ; fi
I want to control Amplify deployments from GitHub Actions, because Amplify auto-build:
- doesn't provide a GitHub Environment,
- doesn't watch the CI for failures and will deploy anyway,
- requires me to duplicate the CI setup and re-run it in Amplify,
- doesn't support running a Cypress job out-of-the-box.
Turn off auto-build (in the App settings / General / Branches).
Add the following script and job:
scripts/amplify-deploy.sh
echo "Deploy app $1 branch $2"
JOB_ID=$(aws amplify start-job --app-id $1 --branch-name $2 --job-type RELEASE | jq -r '.jobSummary.jobId')
echo "Release started"
echo "Job ID is $JOB_ID"
while [[ "$(aws amplify get-job --app-id $1 --branch-name $2 --job-id $JOB_ID | jq -r '.job.summary.status')" =~ ^(PENDING|RUNNING)$ ]]; do sleep 1; done
JOB_STATUS="$(aws amplify get-job --app-id $1 --branch-name $2 --job-id $JOB_ID | jq -r '.job.summary.status')"
echo "Job finished"
echo "Job status is $JOB_STATUS"
deploy:
  runs-on: ubuntu-latest
  env:
    AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
    AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    AWS_DEFAULT_REGION: us-east-1
    AWS_DEFAULT_OUTPUT: json
  steps:
    - uses: actions/checkout@v2
    - name: Deploy
      run: ./scripts/amplify-deploy.sh xxxxxxxxxxxxx master
You could improve the script to fail if the release fails, add needed steps (e.g. lint, test), add a GitHub Environment, etc.
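For example, a minimal sketch of a failure check appended to the end of the script (Amplify reports a terminal job status such as SUCCEED or FAILED, so anything other than SUCCEED is treated as a failure here):

if [[ "$JOB_STATUS" != "SUCCEED" ]]; then
  echo "Deployment failed with status $JOB_STATUS"
  exit 1
fi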
There's also amplify-cli-action but it didn't work for me.
Disable automatic builds:
Go to App settings > General in the AWS Amplify console and disable automatic builds there.
Go to App settings > Build settings and create a webhook, which is a curl command that will trigger a build.
Example: curl -X POST -d {} URL -H "Content-Type: application/json"
Save the URL in GitHub as a secret.
Add the curl script to the GitHub Actions workflow YAML like this:
deploy:
  runs-on: ubuntu-latest
  steps:
    - name: deploy
      run: |
        URL="${{ secrets.WEBHOOK_URL }}"
        curl -X POST -d {} "$URL" -H "Content-Type: application/json"
Similar to answer 2 here, but I used tags instead.
Create an action like ci.yml, turn off auto-build on the staging and prod environments in Amplify, and create the webhook triggers.
name: CI-Staging

on:
  release:
    types: [prereleased]

permissions: read-all # This is required to read the secrets

jobs:
  deploy-staging:
    runs-on: ubuntu-latest
    permissions: read-all # This is required to read the secrets
    steps:
      - name: deploy
        run: |
          URL="${{ secrets.STAGING_DEPLOY_WEBHOOK }}"
          curl -X POST -d {} "$URL" -H "Content-Type: application/json"
name: CI-production

on:
  release:
    types: [released]

permissions: read-all # This is required to read the secrets

jobs:
  deploy-production:
    runs-on: ubuntu-latest
    permissions: read-all # This is required to read the secrets
    steps:
      - name: deploy
        run: |
          URL="${{ secrets.PRODUCTION_DEPLOY_WEBHOOK }}"
          curl -X POST -d {} "$URL" -H "Content-Type: application/json"
I have a GitHub pipeline and I'm piping a GitHub secret variable into a file, but I get the following error:
/home/runner/work/_temp/c6144b9a-c8e3-489a-ae97-795f592c57f0.sh: line 6: /config: Permission denied
echo: write error: Broken pipe
name: pipeline
on: [ push ]
env:
  KUBECONFIG_B64DATA: ${{ secrets.KUBECONFIG_B64DATA }}
jobs:
  deploy:
    name: Deploy
    # if: startsWith(github.ref, 'refs/tags/')
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@master
      - name: Setup Kubectl
        run: |
          sudo apt-get -y install curl
          curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
          chmod +x ./kubectl
          sudo mv ./kubectl /usr/local/bin/kubectl
          sudo echo $KUBECONFIG_B64DATA | base64 --decode > /config
          sudo mkdir -p ~/.kube
          sudo mv config /root/.kube/
EDIT:
I used a different folder to get past the permission issues (/tmp/config).
However, I still struggle to pipe a GitHub secret variable into a file, because GitHub masks the secret and I'm returned with an error:
base64: invalid input
I believe this is because when you echo a secret you simply get **** instead of the actual value.
I spent 4 hours on this issue, then found the solution, which was actually hidden in the comments.
As pointed out by @Kay, this was caused by the whitespace. Doing echo "${KUBECONFIG_B64DATA// /}" | base64 --decode > /tmp/config fixed the problem for me.
Just posting this as an official answer, so that it becomes easier for someone to find it later.
The permission error happens because the redirection > /config is performed by your unprivileged shell, not by sudo, so the write to / is denied. Change this line:
sudo echo $KUBECONFIG_B64DATA | base64 --decode > /config
to
sudo bash -c 'base64 --decode <<< "$KUBECONFIG_B64DATA" > /config'
or
sudo tee /config > /dev/null < <(base64 --decode <<< "$KUBECONFIG_B64DATA")
I have an Ansible task in which I am passing the password value hard-coded.
Ansible script:
- name: Airflow
  rabbitmq_user:
    user: airflow
    password: password
    state: present
    force: yes
  become: yes
  become_method: sudo
  become_user: root
  register: airflow_dbsetup
  notify:
    - restart rabbitmq-server
Now I have created the AWS Parameter Store entries like below (screenshot omitted). How can I pass these values into my Ansible script?
Take a look at the aws_ssm lookup plugin for Ansible.
Example:
- name: Airflow
  rabbitmq_user:
    user: "{{ lookup('aws_ssm', 'rabbitmq_user', region='us-east-1') }}"
    password: "{{ lookup('aws_ssm', 'rabbitmq_password', region='us-east-1') }}"
    state: present
    force: yes
  become: yes
  become_method: sudo
  become_user: root
  register: airflow_dbsetup
  notify:
    - restart rabbitmq-server
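For reference, the corresponding parameters could be created like this (the parameter names are assumptions matching the lookup keys above; the lookup decrypts SecureString values by default):

aws ssm put-parameter --name rabbitmq_user --type String --value airflow
aws ssm put-parameter --name rabbitmq_password --type SecureString --value 'your-password'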