Objective: to grab the JOB_ID of an unknown running pipeline using gcloud and assign it to a variable for later use, when I drain the pipeline.
run: 'gcloud dataflow jobs list --region us-central1 --status active --filter DataflowToBigtable --format="value(JOB_ID.scope())"'
This will output something like DataflowToBigtable0283384848, which is the JOB_ID I want to use. I don't know this value at the start and can't assign it from a secret. So my action looks like this:
name: PI-Engine-Deploy
on:
  push:
    branches: [ develop, feature/deployment-workflow ]
env:
  BUCKET_NAME: ret-dev-dataflow
  PROJECT_ID: ret-01-dev
  REGION: us-central1
  PUBSUB_ID: steps.pubsub.outputs.JOB_ID # I want to assign the value here.
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up JDK 11
        uses: actions/setup-java@v2
        with:
          java-version: '11'
          distribution: 'adopt'
      - name: Build with Maven
        run: mvn -B package --file pom.xml
      - name: Set up Cloud SDK
        uses: google-github-actions/setup-gcloud@master
        with:
          project_id: ${{ secrets.GCP_PROJECT_ID }}
          service_account_key: ${{ secrets.GCP_SA_KEY }}
          export_default_credentials: true
      - name: Use gcloud CLI
        run: gcloud info
      - name: Install Maven
        run: mvn install
      ## Gets the id for the pubsub pipeline
      - name: Get the targeted pipelines
        id: pubsub
        run: 'gcloud dataflow jobs list --region us-central1 --status active --filter DataflowToBigtable --format="value(JOB_ID.scope())"'
      - name: Drain Pubsub
        run: 'gcloud dataflow jobs drain ${{ PUBSUB_ID }}' ## I want to use the assigned value here.
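A workflow-level env value can't be computed from a step's output, because workflow env is evaluated before any step runs. What should work instead is writing the ID to the step's outputs and reading it from there in the drain step. A minimal sketch, assuming the filtered list returns exactly one job (on older runners, echo "::set-output name=JOB_ID::$JOB_ID" plays the role of $GITHUB_OUTPUT):

      - name: Get the targeted pipelines
        id: pubsub
        run: |
          JOB_ID=$(gcloud dataflow jobs list --region us-central1 --status active --filter DataflowToBigtable --format="value(JOB_ID.scope())")
          echo "JOB_ID=$JOB_ID" >> "$GITHUB_OUTPUT"
      - name: Drain Pubsub
        run: gcloud dataflow jobs drain ${{ steps.pubsub.outputs.JOB_ID }} --region us-central1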
I'm creating a workflow where environment variables are set at both the workflow and job level.
While accessing the workflow-level env from a job-level env, I'm getting this error:
Unrecognized named-value: 'env'. Located at position 1 within expression: env.ACCOUNT_ID
All I want is for the job-level env of each job to be able to refer to the workflow-level env.
The workflow:
env:
  AWS_REGION: ${{ vars.AWS_REGION }}
  ACCOUNT_ID: ${{ secrets.TRAINING_ACCOUNT_ID }}
jobs:
  dev:
    runs-on: ubuntu-latest
    env:
      ECR_REGISTRY: ${{ env.ACCOUNT_ID }}.dkr.ecr.${{ env.AWS_REGION }}.amazonaws.com
    steps:
      - name: build
        run: |
          aws --region ${{ env.AWS_REGION }} ecr get-login-password | docker login --username AWS --password-stdin ${{ env.ECR_REGISTRY }}
I found this answer after quite a bit of trial and error; if someone has the official answer, please post it.
But this is the hack that works in my case:
env:
  AWS_REGION: ${{ vars.AWS_REGION }}
  ACCOUNT_ID: ${{ secrets.TRAINING_ACCOUNT_ID }}
jobs:
  dev:
    runs-on: ubuntu-latest
    env:
      ECR_REGISTRY: $ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com
    steps:
      - name: build
        run: |
          aws --region ${{ env.AWS_REGION }} ecr get-login-password | docker login --username AWS --password-stdin ${{ env.ECR_REGISTRY }}
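Why the hack works (my reading, not an official answer): the env context simply isn't available when defining jobs.<job_id>.env, which is what the "Unrecognized named-value" error is saying. The literal $ACCOUNT_ID... string passes through the workflow parser untouched and is only expanded by the shell at run time, where the workflow-level env entries exist as ordinary process environment variables. If you'd rather not depend on shell expansion, referencing vars and secrets directly is allowed in that position, e.g.:

    env:
      ECR_REGISTRY: ${{ secrets.TRAINING_ACCOUNT_ID }}.dkr.ecr.${{ vars.AWS_REGION }}.amazonaws.com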
I am working with GitHub OIDC to log in to AWS and deploy our Terraform code, and I am stuck on terraform init. Most of the solutions on the internet point towards deleting the credentials file or providing the credentials explicitly. I can't do either of those: the credentials file does not exist with OIDC, and I don't want to provide the Access_key and Secret_ID explicitly in the backend module either, since that could be a security risk. Here's my GitHub deployment file:
name: AWS Terraform Plan & Deploy
on:
  push:
    paths:
      - "infrastructure/**"
    # branches-ignore:
    #   - '**'
  pull_request:
env:
  tf_actions_working_dir: infrastructure/env/dev-slb-alpha/dev
  tf_actions_working_dir_prod: infrastructure/env/prod-slb-prod/prod
  GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  TF_WORKSPACE: "default"
  TF_ACTION_COMMENT: 1
  plan: "plan.tfplan"
  BUCKET_NAME: "slb-dev-terraform-state"
  AWS_REGION: "us-east-1"
jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read
    steps:
      - run: sleep 5 # there's still a race condition for now
      - name: Clone Repository (Latest)
        uses: actions/checkout@v2
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-region: us-east-1
          role-to-assume: arn:aws:iam::262267462662:role/slb-dev-github-actions-role
          role-session-name: GithubActionsSession
      # - name: Configure AWS
      #   run: |
      #     export AWS_ROLE_ARN=arn:aws:iam::262267462662:role/slb-dev-github-actions-role
      #     # export AWS_WEB_IDENTITY_TOKEN_FILE=/tmp/awscreds
      #     export AWS_DEFAULT_REGION=us-east-1
      #     # echo AWS_WEB_IDENTITY_TOKEN_FILE=$AWS_WEB_IDENTITY_TOKEN_FILE >> $GITHUB_ENV
      #     echo AWS_ROLE_ARN=$AWS_ROLE_ARN >> $GITHUB_ENV
      #     echo AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION >> $GITHUB_ENV
      - run: aws sts get-caller-identity
  setup:
    runs-on: ubuntu-latest
    environment:
      name: Dev
      url: https://dev.test.com
    name: checkov-action-dev
    steps:
      - name: Checkout repo
        uses: actions/checkout@master
        with:
          submodules: 'true'
      # - name: Add Space to Dev
      #   run: |
      #     sysconfig -r proc exec_disable_arg_limit=1
      #   shell: bash
      - name: Run Checkov action
        run: |
          pip3 install checkov
          checkov --directory /infrastructure
        id: checkov
        # uses: bridgecrewio/checkov-action@master
        # with:
        #   directory: infrastructure/
        #   skip_check: CKV_AWS_1
        #   quiet: true
        #   soft_fail: true
        #   framework: terraform
  tfsec:
    name: tfsec
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      # - name: Terraform security scan
      #   uses: aquasecurity/tfsec-pr-commenter-action@v0.1.10
      #   env:
      #     GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      - name: tfsec
        uses: tfsec/tfsec-sarif-action@master
        with:
          # sarif_file: tfsec.sarif
          github_token: ${{ secrets.INPUT_GITHUB_TOKEN }}
      # - name: Upload SARIF file
      #   uses: github/codeql-action/upload-sarif@v1
      #   with:
      #     sarif_file: tfsec.sarif
  superlinter:
    name: superlinter
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Scan Code Base
        # uses: github/super-linter@v4
        # env:
        #   VALIDATE_ALL_CODEBASE: false
        #   # DEFAULT_BRANCH: master
        #   GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        #   VALIDATE_TERRAFORM_TERRASCAN: false
        uses: terraform-linters/setup-tflint@v1
        with:
          tflint_version: v0.29.0
  terrascan:
    name: terrascan
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Run Terrascan
        id: terrascan
        uses: accurics/terrascan-action@v1
        with:
          iac_type: "terraform"
          iac_version: "v15"
          policy_type: "aws"
          only_warn: true
          # iac_dir:
          # policy_path:
          # skip_rules:
          # config_path:
  terraform:
    defaults:
      run:
        working-directory: ${{ env.tf_actions_working_dir }}
    name: "Terraform"
    runs-on: ubuntu-latest
    needs: build
    steps:
      - name: Clone Repository (Latest)
        uses: actions/checkout@v2
        if: github.event.inputs.git-ref == ''
      - name: Clone Repository (Custom Ref)
        uses: actions/checkout@v2
        if: github.event.inputs.git-ref != ''
        with:
          ref: ${{ github.event.inputs.git-ref }}
      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v1
        with:
          terraform_version: 1.1.2
      - name: Terraform Format
        id: fmt
        run: terraform fmt -check
      - name: Terraform Init
        id: init
        run: |
          # # cat ~/.aws/credentials
          # # export AWS_PROFILE=pki-aws-informatics
          # aws configure list-profiles
          # terraform init -backend-config="bucket=slb-dev-terraform-state"
          terraform init -backend-config="access_key=${{ env.AWS_ACCESS_KEY_ID }}" -backend-config="secret_key=${{ env.AWS_SECRET_ACCESS_KEY }}"
          terraform init --backend-config="access_key=${{ env.AWS_ACCESS_KEY_ID }}" --backend-config="secret_key=${{ env.AWS_SECRET_ACCESS_KEY }}"
      - name: Terraform Validate
        id: validate
        run: terraform validate -no-color
      - name: Terraform Plan
        id: plan
        run: terraform plan -var-file="terraform.tfvars" -out=${{ env.plan }}
      - uses: actions/github-script@0.9.0
        if: github.event_name == 'pull_request'
        env:
          PLAN: "terraform\n${{ steps.plan.outputs.stdout }}"
        with:
          github-token: ${{ secrets.INPUT_GITHUB_TOKEN }}
          script: |
            const output = `#### Terraform Format and Style 🖌\`${{ steps.fmt.outcome }}\`
            #### Terraform Initialization ⚙️\`${{ steps.init.outcome }}\`
            #### Terraform Validation 🤖${{ steps.validate.outputs.stdout }}
            #### Terraform Plan 📖\`${{ steps.plan.outcome }}\`
            <details><summary>Show Plan</summary>
            \`\`\`${process.env.PLAN}\`\`\`
            </details>
            *Pusher: @${{ github.actor }}, Action: \`${{ github.event_name }}\`, Working Directory: \`${{ env.tf_actions_working_dir }}\`, Workflow: \`${{ github.workflow }}\`*`;
            github.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: output
            })
As you can see, I have tried it a couple of ways and still end up with the same error. I have made sure that the profile being used is correct, and I cannot provide credentials in the init command itself. It is validating against the correct profile, since it fetches the correct ARN for the profile I need it to work on. I also read somewhere that the credentials for AWS profiles and S3 can differ, and if that is the case, how can I integrate OIDC in that project? Not sure what or where I might be going wrong otherwise; I'd appreciate any help or pointers.
I can't give advice specific to GitHub (since I'm using Bitbucket), but if you're using OIDC for access to AWS from your SCM of choice, the same principles apply. The S3 backend for Terraform itself doesn't allow specifying any of the normal configuration for OIDC, but you can set it with environment variables and have it work:
AWS_WEB_IDENTITY_TOKEN_FILE=<web-identity-token-file>
AWS_ROLE_ARN=arn:aws:iam::<account-id>:role/<role-name>
For Bitbucket Pipelines users:
1. Specify oidc: true in your pipelines config
2. Write the OIDC token file using e.g. echo $BITBUCKET_STEP_OIDC_TOKEN > $(pwd)/web-identity-token
3. Export the environment variables as above
I've split my S3 backend storage away from the account that has the resources, so I'll need to look at configuring the actual AWS provider separately - it does have options for assume_role.web_identity_token_file and assume_role.role_arn.
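Translating that to the GitHub Actions workflow above (a sketch, not verified against the exact setup): aws-actions/configure-aws-credentials already performs the OIDC exchange and exports temporary AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN into the job's environment, and the S3 backend picks those up automatically, so terraform init needs no credential backend-config at all. One thing to check in the workflow above: the credentials step runs in the build job, but terraform init runs in the separate terraform job, and environment variables do not survive across jobs, so the terraform job needs its own credentials step (and id-token: write permission):

  terraform:
    runs-on: ubuntu-latest
    needs: build
    permissions:
      id-token: write
      contents: read
    steps:
      - uses: actions/checkout@v2
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-region: us-east-1
          role-to-assume: arn:aws:iam::262267462662:role/slb-dev-github-actions-role
      - name: Terraform Init
        # no access_key/secret_key backend-config; the backend reads the AWS_* env vars
        run: terraform init -backend-config="bucket=slb-dev-terraform-state"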
I am trying to build a Docker image and push it to GCP Artifact Registry, but it is failing in GitHub Actions. Here is my workflow YAML file:
on:
  push:
    branches:
      - main
      - featurev1
name: Build and Deploy to Cloud Run
env:
  REGION: 'europe-west1'
  PROJECT_ID: 'myproject'
  CLUSTER_NAME: 'myproject-cluster'
  LOCATION: 'europe-west1'
  ZONE: 'europe-west1'
  ARTIFACT_REGISTRY: 'myproject-cust-seg'
  TARGET_ENV: 'INT'
  NAMESPACE: 'integration'
jobs:
  deploy:
    runs-on: [ self-hosted ]
    # Add "id-token" with the intended permissions.
    # permissions:
    #   contents: 'read'
    #   id-token: 'write'
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Setup gcloud environment
        uses: google-github-actions/setup-gcloud@v0
        with:
          service_account_key: ${{ secrets.INT_PLATFORM_SERVICE_ACCOUNT_KEY }}
          project_id: ${{ env.PROJECT_ID }}
      # Alternative option - authentication via credentials json
      # - id: 'auth'
      #   uses: 'google-github-actions/auth@v0'
      #   with:
      #     credentials_json: ${{ secrets.INT_PLATFORM_SERVICE_ACCOUNT_KEY }}
      - name: Authorize Docker push
        run: gcloud auth configure-docker
      - name: Build and Push Container
        env:
          GIT_TAG: ${{ github.run_id }}
        run: |-
          docker build -t $LOCATION-docker.pkg.dev/$PROJECT_ID/$ARTIFACT_REGISTRY/custapi:$TARGET_ENV-v$GIT_TAG .
          docker push $LOCATION-docker.pkg.dev/$PROJECT_ID/$ARTIFACT_REGISTRY/custapi:$TARGET_ENV-v$GIT_TAG
But I have an error:
Run google-github-actions/setup-gcloud@v0
Error: google-github-actions/setup-gcloud failed with: failed to execute command gcloud --quiet config set project myproject: WARNING: Could not setup log file in /home/master/.config/gcloud/logs, (Could not create directory [/home/master/.config/gcloud/logs/2022.02.10]: Permission denied.
Please verify that you have permissions to write to the parent directory..
The configuration directory may not be writable. To learn more, see https://cloud.google.com/sdk/docs/configurations#creating_a_configuration
ERROR: (gcloud.config.set) Failed to create the default configuration. Ensure your have the correct permissions on: [/home/master/.config/gcloud/configurations].
Could not create directory [/home/master/.config/gcloud/configurations]: Permission denied.
Please verify that you have permission to write to the parent directory.
Right now I am using the service account key JSON file as a secret in GitHub Actions; keyless authentication will be done in the near future, after the successful pilot of phase 1. You can find the details above.
I have set runs-on to "self-hosted", which is our on-premises GitHub Actions runner.
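The error itself looks like a filesystem permission problem on the self-hosted runner rather than a GCP auth problem: gcloud cannot create /home/master/.config/gcloud. Two plausible fixes, both assumptions about how that runner host is set up: repair ownership of the config directory for the account the runner service runs as, or point gcloud at a writable location via the CLOUDSDK_CONFIG environment variable, which the SDK honors for its configuration directory:

    # on the runner host, assuming the runner service runs as user "master":
    sudo chown -R master:master /home/master/.config

    # or, in the workflow, redirect gcloud's config into the job's temp dir:
    env:
      CLOUDSDK_CONFIG: ${{ runner.temp }}/gcloud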
I'm currently trying to set some GitHub Actions secrets, namely my AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
But I'm unable to get my AWS_SECRET_ACCESS_KEY by using the AccessKey.encryptedSecret property, even though I'm able to access AWS_ACCESS_KEY_ID, the region, and every other value.
This is my code:
const makeSecret = (secretName: string, value: pulumi.Input<string>) => (
  new github.ActionsSecret(
    secretName,
    {
      repository: githubRepoName,
      secretName,
      plaintextValue: value,
    }
  )
)

if (!iamUserConfig) {
  const accessKey = new aws.iam.AccessKey("cra-ts-access-policy", {
    user: iamUser.name
  });
  pulumi.all([accessKey.id, accessKey.encryptedSecret]).apply(([
    AWS_ACCESS_KEY_ID,
    AWS_SECRET_ACCESS_KEY
  ]) => {
    makeSecret('AWS_SECRET_ACCESS_KEY', AWS_SECRET_ACCESS_KEY);
    makeSecret('AWS_ACCESS_KEY_ID', AWS_ACCESS_KEY_ID);
  });
}
I have tried different approaches in the code, with the same result.
pulumi up runs without any issues, but when my GitHub workflow runs on push to master I get the following error:
'aws-secret-access-key' must be provided if 'aws-access-key-id' is provided
This is my .github/workflows/main.yml file:
name: cra-ts
on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:
jobs:
  # This workflow contains a single job called "build"
  build:
    # The type of runner that the job will run on
    runs-on: ubuntu-latest
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
        with:
          node-version: '14'
      - name: Install
        run: npm install
      - name: Build
        run: npm build
      - name: Configure AWS Creds
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-region: ${{ secrets.AWS_REGION }}
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      - env:
          BUCKET_NAME: ${{ secrets.BUCKET_NAME }}
        run: aws s3 sync build/ s3://$BUCKET_NAME --delete
And this is my package.json:
"devDependencies": {
"#types/node": "^14"
},
"dependencies": {
"#pulumi/pulumi": "^3.22.1",
"#pulumi/aws": "^4.34.0",
"#pulumi/awsx": "^0.32.0",
"#pulumi/github": "^4.9.1"
}
I have been stuck on this for days; if you need more details, let me know and I can provide them. Appreciate the help. Thanks.
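One likely culprit, offered as an assumption based on the underlying AWS provider rather than anything stated above: encryptedSecret is only populated when the AccessKey is created with a pgpKey, so without one it resolves to an empty value, the AWS_SECRET_ACCESS_KEY GitHub secret is created empty, and configure-aws-credentials then fails with exactly the error shown. The plaintext value lives on the secret output instead:

// sketch: read accessKey.secret (populated when no pgpKey is given) instead of encryptedSecret
pulumi.all([accessKey.id, accessKey.secret]).apply(([
  AWS_ACCESS_KEY_ID,
  AWS_SECRET_ACCESS_KEY,
]) => {
  makeSecret('AWS_ACCESS_KEY_ID', AWS_ACCESS_KEY_ID);
  makeSecret('AWS_SECRET_ACCESS_KEY', AWS_SECRET_ACCESS_KEY);
});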
I have followed this question: How can I connect GitHub actions with AWS deployments without using a secret key?.
However, I am trying to go one step further by deploying a Lambda function using Serverless.
Here is what I have tried so far:
name: For Production
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read
    strategy:
      matrix:
        node-version: [16.x]
        # See supported Node.js release schedule at https://nodejs.org/en/about/releases/
    steps:
      - uses: actions/checkout@v2
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v2
        with:
          node-version: ${{ matrix.node-version }}
          cache: 'npm'
          cache-dependency-path: ./backend-operations/package-lock.json
      - name: Create env file
        run: |
          touch ./backend-operations/.env
          echo JWKS_URI=${{ secrets.JWKS_URI }} >> ./backend-operations/.env
          echo AUDIENCE=${{ secrets.AUDIENCE }} >> ./backend-operations/.env
          echo TOKEN_ISSUER=${{ secrets.TOKEN_ISSUER }} >> ./backend-operations/.env
      - run: npm ci
        working-directory: ./backend-operations
      - run: npm run build --if-present
        working-directory: ./backend-operations
      - run: npm test
        working-directory: ./backend-operations
      - name: Install Serverless Framework
        run: npm install -g serverless
      - name: Configure AWS
        run: |
          sleep 5 # Need to have a delay to acquire this
          export AWS_ROLE_ARN=arn:aws:iam::xxxxxxx:role/my-role
          export AWS_WEB_IDENTITY_TOKEN_FILE=/tmp/awscreds
          export AWS_DEFAULT_REGION=ap-southeast-1
          echo AWS_WEB_IDENTITY_TOKEN_FILE=$AWS_WEB_IDENTITY_TOKEN_FILE >> $GITHUB_ENV
          echo AWS_ROLE_ARN=$AWS_ROLE_ARN >> $GITHUB_ENV
          echo AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION >> $GITHUB_ENV
          curl -H "Authorization: bearer $ACTIONS_ID_TOKEN_REQUEST_TOKEN" \
            "$ACTIONS_ID_TOKEN_REQUEST_URL&audience=githubactions" \
            | jq -r '.value' > $AWS_WEB_IDENTITY_TOKEN_FILE
          sls deploy --stage prod --verbose
        working-directory: './backend-operations'
      # - name: Deploy to AWS
      #   run: serverless deploy --stage prod --verbose
      #   working-directory: './backend-operations'
      - name: Upload coverage to Codecov
        uses: codecov/codecov-action@v1
        with:
          token: ${{ secrets.CODECOV_SECRET_TOKEN }}
I solved it using the aws-actions/configure-aws-credentials GitHub Action, as it sets a temporary access key ID and secret in the environment.
Hence there is no need to create AWS programmatic keys from here on.
Note: the latest update of GitHub OIDC has changed its domain name -> https://token.actions.githubusercontent.com
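For completeness, the AWS side of this setup (a sketch with placeholder account and repo values, following the standard GitHub OIDC pattern): the role passed in secrets.ROLE_ARN needs a trust policy that lets the GitHub OIDC provider assume it via sts:AssumeRoleWithWebIdentity:

    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<account-id>:oidc-provider/token.actions.githubusercontent.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringLike": {
          "token.actions.githubusercontent.com:sub": "repo:<org>/<repo>:*"
        }
      }
    }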
# This workflow will do a clean install of node dependencies, cache/restore them, build the source code and run tests across different versions of node
# For more information see: https://help.github.com/actions/language-and-framework-guides/using-nodejs-with-github-actions
name: Production-Deployment
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read
    strategy:
      matrix:
        node-version: [16.x]
        # See supported Node.js release schedule at https://nodejs.org/en/about/releases/
    steps:
      - uses: actions/checkout@v2
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v2
        with:
          node-version: ${{ matrix.node-version }}
          cache: 'npm'
          cache-dependency-path: ./backend-operations/package-lock.json
      - name: Create env file
        run: |
          touch ./backend-operations/.env
          echo JWKS_URI=${{ secrets.JWKS_URI }} >> ./backend-operations/.env
          echo AUDIENCE=${{ secrets.AUDIENCE }} >> ./backend-operations/.env
          echo TOKEN_ISSUER=${{ secrets.TOKEN_ISSUER }} >> ./backend-operations/.env
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@master
        with:
          aws-region: ap-southeast-1
          role-to-assume: ${{ secrets.ROLE_ARN }}
      - run: npm ci
        working-directory: ./backend-operations
      - run: npm run build --if-present
        working-directory: ./backend-operations
      - run: npm test
        working-directory: ./backend-operations
      - name: Install Serverless Framework
        run: npm install -g serverless
      - name: Serverless Authentication
        run: sls config credentials --provider aws --key ${{ env.AWS_ACCESS_KEY_ID }} --secret ${{ env.AWS_SECRET_ACCESS_KEY }}
      - name: Deploy to AWS
        run: serverless deploy --stage prod --verbose
        working-directory: './backend-operations'
      - name: Upload coverage to Codecov
        uses: codecov/codecov-action@v1
        with:
          token: ${{ secrets.CODECOV_SECRET_TOKEN }}
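One caveat on the workflow above, flagged as my reading of how the Framework resolves credentials rather than something confirmed in the answer: the temporary credentials exported by configure-aws-credentials include an AWS_SESSION_TOKEN, which the sls config credentials step does not copy, while Serverless also reads the AWS_* variables straight from the environment. So the explicit authentication step can likely be dropped:

      # Serverless picks up AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY / AWS_SESSION_TOKEN from the environment
      - name: Deploy to AWS
        run: serverless deploy --stage prod --verbose
        working-directory: './backend-operations'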