How to run AWS CLI command tasks in Ansible Tower

AWS CLI command tasks in Ansible playbooks work fine from the command line if the AWS credentials are specified as environment variables, as boto requires. More info can be found in the boto documentation on environment variables.
But they fail to run in Tower because it exports a different set of environment variables:
AWS_ACCESS_KEY
AWS_SECRET_KEY
To make them work in Tower, just add the following to the task definition:
environment:
  AWS_ACCESS_KEY_ID: "{{ lookup('env','AWS_ACCESS_KEY') }}"
  AWS_SECRET_ACCESS_KEY: "{{ lookup('env','AWS_SECRET_KEY') }}"
e.g. this task:
- name: Describe instances
  command: aws ec2 describe-instances --region us-east-1
will transform to:
- name: Describe instances
  command: aws ec2 describe-instances --region us-east-1
  environment:
    AWS_ACCESS_KEY_ID: "{{ lookup('env','AWS_ACCESS_KEY') }}"
    AWS_SECRET_ACCESS_KEY: "{{ lookup('env','AWS_SECRET_KEY') }}"
NOTE: This only injects the environment variables for that specific task, not the whole playbook!
So you have to amend every AWS CLI task this way.
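If you would rather not amend every single task, Ansible also accepts environment: at the play level, so the same mapping applies to all tasks in the play. A minimal sketch under that assumption (the play name and hosts are placeholders):
- name: AWS CLI tasks
  hosts: localhost
  gather_facts: no
  environment:
    AWS_ACCESS_KEY_ID: "{{ lookup('env','AWS_ACCESS_KEY') }}"
    AWS_SECRET_ACCESS_KEY: "{{ lookup('env','AWS_SECRET_KEY') }}"
  tasks:
    - name: Describe instances
      command: aws ec2 describe-instances --region us-east-1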

Put your environment variables in a file:
export AWS_ACCESS_KEY=
export AWS_SECRET_KEY=
Save the file as ~/.vars on the remote host, then in your playbook:
- name: Describe instances
  shell: source ~/.vars && aws ec2 describe-instances --region us-east-2
(note the shell module, since the command module will not process source or &&). For security you could delete the file after the run and copy it again in the next play.
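A rough sketch of that copy-use-delete cycle, assuming the variables file sits next to the playbook as vars.sh (a hypothetical name):
- name: Copy the credentials file to the remote host
  copy:
    src: vars.sh
    dest: ~/.vars
    mode: '0600'
- name: Describe instances
  shell: source ~/.vars && aws ec2 describe-instances --region us-east-2
- name: Delete the credentials file again
  file:
    path: ~/.vars
    state: absent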

While this may not be applicable to Tower (we use the open-source version): set up your ~/.aws and/or ~/.boto files on the host instead.

Related

How to configure GitHub Actions to assume a role to access my CodeBuild project

I am using GitHub Actions with CodeBuild. When I run GitHub Actions, I get the error message "CodeBuild project name can not be found". The issue is that my CodeBuild project lives in my assumed role (sandbox_role), but the GitHub Action looks for the project in the root account that I configured as environment variables in GitHub secrets. How can I configure my GitHub Actions workflow to first connect to the root account and then assume sandbox_role from there to reach my CodeBuild project? Below is my code sample. I am using Terragrunt/Terraform code to provision my environment.
name: 'GitHub Actions For CodeBuild'
on:
  pull_request:
    branches:
      - staging
jobs:
  CodeBuild:
    name: 'Build'
    runs-on: ubuntu-latest
    steps:
      - name: 'checkout'
        uses: actions/checkout@v2
      - name: configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
      - name: Run CodeBuild
        uses: aws-actions/aws-codebuild-run-build@v1
        with:
          project-name: CodeBuild
          buildspec-override: staging/buildspec.yml
          env-vars-for-codebuild: |
            TF_INPUT,
            AWS_ACCESS_KEY_ID,
            AWS_SECRET_ACCESS_KEY,
            AWS_REGION,
        env:
          TF_INPUT: false
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_REGION: us-east-1
I'm not entirely sure this works, but whenever I use roles I also pass the role ARN to tell AWS which role you are using and which permissions it should have.
This role ARN can be added to the configuration and credentials files:
Configuration (~/.aws/config):
region = us-east-1
output = json
role_arn = arn:aws:iam::account_id:role/role-name
source_profile=default
Credentials (~/.aws/credentials):
[default]
aws_access_key_id="your_key_id"
aws_secret_access_key="your_access_key"
aws_session_token="your_session_token"
source_profile=default
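If you prefer to do the role hop inside the workflow itself rather than through config files, the configure-aws-credentials action also accepts a role-to-assume input. A hedged sketch, where the account ID and role ARN below are placeholders for your sandbox_role:
- name: Configure AWS credentials and assume sandbox_role
  uses: aws-actions/configure-aws-credentials@v1
  with:
    aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
    aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    aws-region: us-east-1
    # hypothetical ARN: replace the account ID and role name with your own sandbox_role
    role-to-assume: arn:aws:iam::111111111111:role/sandbox_role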

How to perform cross-account provisioning using Ansible?

I have run a Terraform script to provision an EKS cluster using cross-account provisioning: I ran the Terraform scripts on account 1 and the EKS cluster got created in account 2.
assume_role helps me achieve this:
provider "aws" {
assume_role {
role_arn = var.role_arn
}
}
Is there an equivalent of Terraform's assume_role in Ansible?
I need to run a playbook to install Calico on the EKS cluster in account 2, but I would like to run the playbook from inside a null_resource in the Terraform code above.
resource "aws_eks_cluster" "eks_for_account_2" {
...
}
resource "null_resource" "calico_provisioner" {
provisioner "local-exec" {
command = "<command to install Calico on eks_for_account_2. \
I would like this command to run on account 2.
But on contrary to what I'm trying to achieve, this command gets executed on account 1 \
(instead of getting executed on account 2)
How to run this on account 2? Please advise>"
}
}
I am not able to find any articles which explain this.
Please help.
Is there an equivalent of Terraform's assume_role in Ansible?
The sts_assume_role task will do that, and if you run it in a separate play you can set the environment variables that the AWS tasks look for, so you don't have to pass the credentials around all the time (but be forewarned, I haven't verified this works for all tasks):
- name: assume role play
  hosts: localhost
  gather_facts: no
  tasks:
    - sts_assume_role:
        role_arn: etc etc
        role_session_name: etc-etc
      register: my_role_creds

- name: now run with those creds
  hosts: whatever-you-want
  environment:
    AWS_ACCESS_KEY_ID: '{{ hostvars.localhost.my_role_creds.sts_creds.access_key }}'
    AWS_SECRET_ACCESS_KEY: '{{ hostvars.localhost.my_role_creds.sts_creds.secret_key }}'
    AWS_SECURITY_TOKEN: '{{ hostvars.localhost.my_role_creds.sts_creds.session_token }}'
  tasks:
    - name: show who I am
      aws_caller_info:

How to configure / use AWS CLI in GitHub Actions?

I'd like to run commands like aws amplify start-job in GitHub Actions. I understand the AWS CLI is pre-installed but I'm not sure how to configure it.
In particular, I'm not sure how the env vars are named for all configuration options as some docs only mention AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY but nothing for the region and output settings.
I recommend using this AWS action for setting all the AWS region and credentials environment variables in the GitHub Actions environment. It doesn't set the output env vars so you still need to do that, but it has nice features around making sure that credential env vars are masked in the output as secrets, supports assuming a role, and provides your account ID if you need it in other actions.
https://github.com/marketplace/actions/configure-aws-credentials-action-for-github-actions
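A minimal sketch of a step using that action, assuming the keys are stored as repository secrets and that us-east-1 is just an example region (the Amplify command reuses the placeholder app ID from below):
- name: Configure AWS credentials
  uses: aws-actions/configure-aws-credentials@v1
  with:
    aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
    aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    aws-region: us-east-1
- name: Start the Amplify job
  run: aws amplify start-job --app-id xxx --branch-name master --job-type RELEASE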
I could provide the following secrets and env vars and then use the commands:
env:
  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  AWS_DEFAULT_REGION: us-east-1
  AWS_DEFAULT_OUTPUT: json
E.g.
deploy:
  runs-on: ubuntu-latest
  steps:
    - name: Deploy
      env:
        AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
        AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        AWS_DEFAULT_REGION: eu-west-1
        AWS_DEFAULT_OUTPUT: json
      run: aws amplify start-job --app-id xxx --branch-name master --job-type RELEASE
In my experience, the out-of-the-box AWS CLI that comes with the action runner works just fine.
But there are times when you would prefer to use a credentials file (like the Terraform AWS provider does), and this is an example of that.
This base64-decodes the encoded file so it can be used in the following steps.
- name: Write into file
  id: write_file
  uses: timheuer/base64-to-file@v1.0.3
  with:
    fileName: 'myTemporaryFile.txt'
    encodedString: ${{ secrets.AWS_CREDENTIALS_FILE_BASE64 }}
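To point the AWS CLI at that decoded file in a later step, one option is the AWS_SHARED_CREDENTIALS_FILE environment variable. A hedged sketch, assuming the action's filePath output holds the path it wrote:
- name: List S3 buckets using the decoded credentials file
  env:
    AWS_SHARED_CREDENTIALS_FILE: ${{ steps.write_file.outputs.filePath }}
    AWS_DEFAULT_REGION: us-east-1
  run: aws s3 ls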

Using GitHub Actions for CI/CD on an AWS EC2 machine?

I am new to GitHub Actions workflows and was wondering whether it is possible to target my EC2 machine directly for CI and CD after every push.
I have seen that it is possible with ECS, but I wanted a straightforward solution; as we are trying this out on our dev environment, we don't want to overshoot our budget.
Is it possible, and if yes, how can I achieve it?
If you build your code in GitHub Actions and just want to copy the package over to an existing EC2 instance, you can use the SCP Files action:
https://github.com/marketplace/actions/scp-files
- name: copy file via ssh key
  uses: appleboy/scp-action@master
  with:
    host: ${{ secrets.HOST }}
    username: ${{ secrets.USERNAME }}
    port: ${{ secrets.PORT }}
    key: ${{ secrets.KEY }}
    source: "tests/a.txt,tests/b.txt"
    target: "test"
If you have any other AWS resource that interacts with EC2 (or any other AWS service) and you want to use the AWS CLI, you can use the AWS Credentials action:
https://github.com/aws-actions/configure-aws-credentials
- name: Configure AWS credentials from Test account
  uses: aws-actions/configure-aws-credentials@v1
  with:
    aws-access-key-id: ${{ secrets.TEST_AWS_ACCESS_KEY_ID }}
    aws-secret-access-key: ${{ secrets.TEST_AWS_SECRET_ACCESS_KEY }}
    aws-region: us-east-1
- name: Copy files to the test website with the AWS CLI
  run: |
    aws s3 sync . s3://my-s3-test-website-bucket
Here is a nice article whose goal is to build a CI/CD stack with GitHub Actions + AWS EC2, CodeDeploy and S3.

AuthFailure: AWS was not able to validate the provided access credentials

I am trying to create my GitLab CI/CD pipeline with AWS. The goal is to terminate the existing EC2 instance, run a new instance from a template, then associate an Elastic IP with the new instance. The runner I'm using is a Docker runner with the python:latest image. When I run my CI/CD pipeline I get:
An error occurred (AuthFailure) when calling the DescribeInstances operation: AWS was not able to validate the provided access credentials
My .gitlab-ci.yml is as follows:
stages:
  - build
AWS_Install:
  image: python:latest
  stage: build
  tags:
    - Docker
  script:
    - pip install awscli
    - export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID
    - export AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY
    - export AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION
    - echo "running script :)"
    - OLDEC2=$(aws ec2 describe-instances --filters "Name=instance-state-name,Values=running" --query "Reservations[*].Instances[*].[InstanceId]" --output text)
    - aws ec2 terminate-instances --instance-ids "$OLDEC2"
    - sleep 200.0
    - aws ec2 run-instances --launch-template LaunchTemplateId=[launch-template-id],Version=12
    - sleep 120.0
    - NEWEC2=$(aws ec2 describe-instances --filters "Name=instance-state-name,Values=running" --query "Reservations[*].Instances[*].[InstanceId]" --output text)
    - aws ec2 associate-address --allocation-id [allocation-id] --instance-id "$NEWEC2" --allow-reassociation
What I've checked/tried:
- AWS credentials: They are correct and valid
- aws configure: Everything sets correctly (checked using aws configure get)
- Ensured UNIX line endings were being used
- Adding a variable section to the YAML file to include environment variables
- Hardcoding credential values
- New user on AWS with all necessary credentials
- Using export to get the environment variables
- Running everything in one script rather than having a before script
- Having multiple stages/Jobs
Turns out the solution was to use a public (shared) runner on GitLab rather than a custom one.
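If shared runners are enabled for the project, one hedged way to move the job onto them is simply to drop the custom runner tag, since untagged jobs can then be picked up by the shared fleet. A sketch based on the snippet above:
AWS_Install:
  image: python:latest
  stage: build
  # no "tags:" section here, so the job is no longer pinned to the custom runner tagged "Docker"
  script:
    - pip install awscli
    - aws ec2 describe-instances --region "$AWS_DEFAULT_REGION"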