Pulumi AccessKey.encryptedSecret won't work - amazon-web-services

I'm currently trying to set some GitHub Actions secrets, namely my AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY,
but I'm unable to get at my AWS_SECRET_ACCESS_KEY using the AccessKey.encryptedSecret property, even though I'm able to access AWS_ACCESS_KEY_ID, the region, and any other value.
This is my code:
const makeSecret = (secretName: string, value: pulumi.Input<string>) =>
  new github.ActionsSecret(secretName, {
    repository: githubRepoName,
    secretName,
    plaintextValue: value,
  });

if (!iamUserConfig) {
  const accessKey = new aws.iam.AccessKey("cra-ts-access-policy", {
    user: iamUser.name,
  });
  pulumi.all([accessKey.id, accessKey.encryptedSecret]).apply(([
    AWS_ACCESS_KEY_ID,
    AWS_SECRET_ACCESS_KEY,
  ]) => {
    makeSecret('AWS_SECRET_ACCESS_KEY', AWS_SECRET_ACCESS_KEY);
    makeSecret('AWS_ACCESS_KEY_ID', AWS_ACCESS_KEY_ID);
  });
}
I have tried different approaches in code, with the same result each time.
pulumi up runs without any issues, but when my GitHub workflow runs on push to master I get the following error:
'aws-secret-access-key' must be provided if 'aws-access-key-id' is provided
This is my .github/workflows/main.yml file:
name: cra-ts
on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:
jobs:
  # This workflow contains a single job called "build"
  build:
    # The type of runner that the job will run on
    runs-on: ubuntu-latest
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
        with:
          node-version: '14'
      - name: Install
        run: npm install
      - name: Build
        run: npm run build
      - name: Configure AWS Creds
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-region: ${{ secrets.AWS_REGION }}
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      - env:
          BUCKET_NAME: ${{ secrets.BUCKET_NAME }}
        run: aws s3 sync build/ s3://$BUCKET_NAME --delete
And this is my package.json:
"devDependencies": {
"#types/node": "^14"
},
"dependencies": {
"#pulumi/pulumi": "^3.22.1",
"#pulumi/aws": "^4.34.0",
"#pulumi/awsx": "^0.32.0",
"#pulumi/github": "^4.9.1"
}
I have been stuck on this for days; if you need more details, let me know and I'll provide them. I appreciate the help. Thanks.
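One thing worth checking (a sketch under the assumption that no pgpKey was passed to the AccessKey resource): on aws.iam.AccessKey, the encryptedSecret output is the secret encrypted with the PGP key supplied via the pgpKey argument, so without a pgpKey it is not a usable plaintext credential. The plain secret output, which Pulumi already tracks as a secret, is what a GitHub Actions secret needs:

```typescript
// Sketch: use accessKey.secret (plaintext, marked secret by Pulumi)
// instead of accessKey.encryptedSecret, which is only meaningful
// when a pgpKey is supplied to the AccessKey resource.
const accessKey = new aws.iam.AccessKey("cra-ts-access-policy", {
  user: iamUser.name,
});

pulumi.all([accessKey.id, accessKey.secret]).apply(([id, secret]) => {
  makeSecret("AWS_ACCESS_KEY_ID", id);
  makeSecret("AWS_SECRET_ACCESS_KEY", secret);
});
```

Note that creating resources inside apply is generally discouraged; passing the Output values directly to plaintextValue also works, since ActionsSecret accepts pulumi.Input<string>.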

Related

Github Actions - Deploy to Amazon ECS: 'task-definition.json: Unexpected token � in JSON at position 0' error

I am using the Github action "Deploy to Amazon ECS" to create a Docker container from a Node.js backend and deploy it on ECS.
During deployment, I receive the following error:
Fill in the new image ID in the Amazon ECS task definition
Run aws-actions/amazon-ecs-render-task-definition@v1
Error: /home/runner/work/project-app-strapi/project-app-strapi/task-definition.json: Unexpected token � in JSON at position 0
The task-definition.json was generated by the following command (as I am not very experienced with the aws ecs CLI and prefer to create the infrastructure using the AWS Console):
aws ecs describe-task-definition --task-definition "arn:aws:ecs:eu-west-1:076457945931:task-definition/project-strapi:2" --profile project > task-definition.json
I also checked the file, and it is valid JSON that doesn't contain any harmful hidden characters. It looks like this:
{
  "taskDefinition": {
    "taskDefinitionArn": "arn:aws:ecs:eu-west-1:076457945931:task-definition/project-strapi:2",
    "containerDefinitions": [{
      "name": "project-app",
      "image": "076457945931.dkr.ecr.eu-west-1.amazonaws.com/company/project-strapi",
      "cpu": 0,
      "portMappings": [{
        "containerPort": 1337,
        "hostPort": 1337,
        "protocol": "tcp"
      }],
      "essential": true,
      ... other fields, I don't believe they are needed
}
The workflow file is the same as the default aws.yml for this GitHub Action; no changes were made besides filling in the variables:
name: Deploy to Amazon ECS
on:
  push:
    branches: [ "main" ]
env:
  AWS_REGION: eu-west-1                  # set this to your preferred AWS region, e.g. us-west-1
  ECR_REPOSITORY: company/project-strapi # set this to your Amazon ECR repository name
  ECS_SERVICE: project-strapi            # set this to your Amazon ECS service name
  ECS_CLUSTER: project-strapi-app        # set this to your Amazon ECS cluster name
  ECS_TASK_DEFINITION: task-definition.json # set this to the path to your Amazon ECS task definition
                                            # file, e.g. .aws/task-definition.json
  CONTAINER_NAME: project-app            # set this to the name of the container in the
                                         # containerDefinitions section of your task definition
permissions:
  contents: read
jobs:
  deploy:
    name: Deploy
    runs-on: ubuntu-latest
    environment: production
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}
      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1
      - name: Build, tag, and push image to Amazon ECR
        id: build-image
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          IMAGE_TAG: ${{ github.sha }}
        run: |
          # Build a docker container and
          # push it to ECR so that it can
          # be deployed to ECS.
          docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
          docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
          echo "::set-output name=image::$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG"
      - name: Fill in the new image ID in the Amazon ECS task definition
        id: task-def
        uses: aws-actions/amazon-ecs-render-task-definition@v1
        with:
          task-definition: ${{ env.ECS_TASK_DEFINITION }}
          container-name: ${{ env.CONTAINER_NAME }}
          image: ${{ steps.build-image.outputs.image }}
      - name: Deploy Amazon ECS task definition
        uses: aws-actions/amazon-ecs-deploy-task-definition@v1
        with:
          task-definition: ${{ steps.task-def.outputs.task-definition }}
          service: ${{ env.ECS_SERVICE }}
          cluster: ${{ env.ECS_CLUSTER }}
          wait-for-service-stability: true
I tried several things, specifically various changes to the formatting of the JSON and changing the directory of the file, but the error remains.
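The "�" at position 0 usually means the file starts with a byte-order mark: for example, PowerShell's > redirection writes UTF-16 with a BOM, which Node's JSON parser rejects even though the file looks like valid JSON in an editor. (Separately, without --query taskDefinition the output is wrapped in a top-level "taskDefinition" key, which the render action does not expect.) A small Python sketch showing the symptom and a BOM-tolerant loader:

```python
import codecs
import json

def load_json_any_bom(path):
    # Read raw bytes and strip a UTF-8/UTF-16 byte-order mark if present;
    # a BOM is what produces "Unexpected token ... at position 0".
    raw = open(path, "rb").read()
    for bom, enc in ((codecs.BOM_UTF8, "utf-8"),
                     (codecs.BOM_UTF16_LE, "utf-16-le"),
                     (codecs.BOM_UTF16_BE, "utf-16-be")):
        if raw.startswith(bom):
            return json.loads(raw[len(bom):].decode(enc))
    return json.loads(raw.decode("utf-8"))

# Simulate what PowerShell's `>` redirection produces: UTF-16 LE with a BOM.
with open("task-definition.json", "wb") as f:
    f.write(codecs.BOM_UTF16_LE + '{"family": "project-strapi"}'.encode("utf-16-le"))

print(load_json_any_bom("task-definition.json")["family"])  # project-strapi
```

In the workflow itself the simpler fix is to regenerate the file as plain UTF-8 (e.g. run the describe command on the runner, as the answer below does), rather than post-processing it.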
First download the task definition file, then update the image in the task definition, then push the updated definition to the ECS service; then you won't get this issue:
- name: Download task definition
  run: |
    aws ecs describe-task-definition --task-definition **your task definition name** --query taskDefinition > taskdefinition.json
- name: new image in ECS taskdefinition
  id: demo
  uses: aws-actions/amazon-ecs-render-task-definition@v1
  with:
    task-definition: taskdefinition.json
    container-name: **your container name**
    image: ${{ steps.check_files.outputs.**image** }}
- name: updating task-definition file
  run: cat ${{ steps.demo.outputs.task-definition }} > taskdefinition.json

No valid credential source for S3 backend found with GitHub OIDC

I am working with GitHub OIDC to log in to AWS and deploy our Terraform code, and I am stuck on terraform init. Most of the solutions on the internet point towards deleting the credentials file or providing the credentials explicitly. I can't do either of those: the credentials file does not exist with OIDC, and I don't want to explicitly provide the access key and secret ID in the backend module, since that could lead to a security risk. Here's my GitHub deployment file:
name: AWS Terraform Plan & Deploy
on:
  push:
    paths:
      - "infrastructure/**"
    # branches-ignore:
    #   - '**'
  pull_request:
env:
  tf_actions_working_dir: infrastructure/env/dev-slb-alpha/dev
  tf_actions_working_dir_prod: infrastructure/env/prod-slb-prod/prod
  GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  TF_WORKSPACE: "default"
  TF_ACTION_COMMENT: 1
  plan: "plan.tfplan"
  BUCKET_NAME: "slb-dev-terraform-state"
  AWS_REGION: "us-east-1"
jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read
    steps:
      - run: sleep 5 # there's still a race condition for now
      - name: Clone Repository (Latest)
        uses: actions/checkout@v2
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-region: us-east-1
          role-to-assume: arn:aws:iam::262267462662:role/slb-dev-github-actions-role
          role-session-name: GithubActionsSession
      # - name: Configure AWS
      #   run: |
      #     export AWS_ROLE_ARN=arn:aws:iam::262267462662:role/slb-dev-github-actions-role
      #     # export AWS_WEB_IDENTITY_TOKEN_FILE=/tmp/awscreds
      #     export AWS_DEFAULT_REGION=us-east-1
      #     # echo AWS_WEB_IDENTITY_TOKEN_FILE=$AWS_WEB_IDENTITY_TOKEN_FILE >> $GITHUB_ENV
      #     echo AWS_ROLE_ARN=$AWS_ROLE_ARN >> $GITHUB_ENV
      #     echo AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION >> $GITHUB_ENV
      - run: aws sts get-caller-identity
  setup:
    runs-on: ubuntu-latest
    environment:
      name: Dev
      url: https://dev.test.com
    name: checkov-action-dev
    steps:
      - name: Checkout repo
        uses: actions/checkout@master
        with:
          submodules: 'true'
      # - name: Add Space to Dev
      #   run: |
      #     sysconfig -r proc exec_disable_arg_limit=1
      #   shell: bash
      - name: Run Checkov action
        run: |
          pip3 install checkov
          checkov --directory /infrastructure
        id: checkov
        # uses: bridgecrewio/checkov-action@master
        # with:
        #   directory: infrastructure/
        #   skip_check: CKV_AWS_1
        #   quiet: true
        #   soft_fail: true
        #   framework: terraform
  tfsec:
    name: tfsec
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      # - name: Terraform security scan
      #   uses: aquasecurity/tfsec-pr-commenter-action@v0.1.10
      #   env:
      #     GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      - name: tfsec
        uses: tfsec/tfsec-sarif-action@master
        with:
          # sarif_file: tfsec.sarif
          github_token: ${{ secrets.INPUT_GITHUB_TOKEN }}
      # - name: Upload SARIF file
      #   uses: github/codeql-action/upload-sarif@v1
      #   with:
      #     sarif_file: tfsec.sarif
  superlinter:
    name: superlinter
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Scan Code Base
        # uses: github/super-linter@v4
        # env:
        #   VALIDATE_ALL_CODEBASE: false
        #   # DEFAULT_BRANCH: master
        #   GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        #   VALIDATE_TERRAFORM_TERRASCAN: false
        uses: terraform-linters/setup-tflint@v1
        with:
          tflint_version: v0.29.0
  terrascan:
    name: terrascan
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Run Terrascan
        id: terrascan
        uses: accurics/terrascan-action@v1
        with:
          iac_type: "terraform"
          iac_version: "v15"
          policy_type: "aws"
          only_warn: true
          # iac_dir:
          # policy_path:
          # skip_rules:
          # config_path:
  terraform:
    defaults:
      run:
        working-directory: ${{ env.tf_actions_working_dir }}
    name: "Terraform"
    runs-on: ubuntu-latest
    needs: build
    steps:
      - name: Clone Repository (Latest)
        uses: actions/checkout@v2
        if: github.event.inputs.git-ref == ''
      - name: Clone Repository (Custom Ref)
        uses: actions/checkout@v2
        if: github.event.inputs.git-ref != ''
        with:
          ref: ${{ github.event.inputs.git-ref }}
      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v1
        with:
          terraform_version: 1.1.2
      - name: Terraform Format
        id: fmt
        run: terraform fmt -check
      - name: Terraform Init
        id: init
        run: |
          # # cat ~/.aws/credentials
          # # export AWS_PROFILE=pki-aws-informatics
          # aws configure list-profiles
          # terraform init -backend-config="bucket=slb-dev-terraform-state"
          terraform init -backend-config="access_key=${{ env.AWS_ACCESS_KEY_ID }}" -backend-config="secret_key=${{ env.AWS_SECRET_ACCESS_KEY }}"
          terraform init --backend-config="access_key=${{ env.AWS_ACCESS_KEY_ID }}" --backend-config="secret_key=${{ env.AWS_SECRET_ACCESS_KEY }}"
      - name: Terraform Validate
        id: validate
        run: terraform validate -no-color
      - name: Terraform Plan
        id: plan
        run: terraform plan -var-file="terraform.tfvars" -out=${{ env.plan }}
      - uses: actions/github-script@0.9.0
        if: github.event_name == 'pull_request'
        env:
          PLAN: "terraform\n${{ steps.plan.outputs.stdout }}"
        with:
          github-token: ${{ secrets.INPUT_GITHUB_TOKEN }}
          script: |
            const output = `#### Terraform Format and Style 🖌\`${{ steps.fmt.outcome }}\`
            #### Terraform Initialization ⚙️\`${{ steps.init.outcome }}\`
            #### Terraform Validation 🤖${{ steps.validate.outputs.stdout }}
            #### Terraform Plan 📖\`${{ steps.plan.outcome }}\`
            <details><summary>Show Plan</summary>
            \`\`\`${process.env.PLAN}\`\`\`
            </details>
            *Pusher: @${{ github.actor }}, Action: \`${{ github.event_name }}\`, Working Directory: \`${{ env.tf_actions_working_dir }}\`, Workflow: \`${{ github.workflow }}\`*`;
            github.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: output
            })
As you can see, I have tried it a couple of ways and still end up with the same error. I have made sure the profile being used is correct, and it is authenticating against the right one, since it fetches the correct ARN for the role I need it to work with. I also cannot provide credentials in the init command itself. I read somewhere that the credentials for AWS profiles and for S3 could be different; if that is the case, how can I integrate OIDC into that project? I'm not sure what or where I might be going wrong otherwise, and I appreciate any help or pointers.
I can't give advice specific to GitHub (since I'm using Bitbucket), but if you're using OIDC for access to AWS from your SCM of choice, the same principles apply. The S3 backend for Terraform itself doesn't allow specifying any of the normal configuration for OIDC, but you can set this with environment variables and have it work:
AWS_WEB_IDENTITY_TOKEN_FILE=<web-identity-token-file>
AWS_ROLE_ARN=arn:aws:iam::<account-id>:role/<role-name>
For Bitbucket Pipelines users:
1. Specify oidc: true in your pipelines config.
2. Write the OIDC token file, e.g. echo $BITBUCKET_STEP_OIDC_TOKEN > $(pwd)/web-identity-token
3. Export the environment variables as above.
I've split my S3 backend storage away from the account that has resources, so will need to look at configuring the actual AWS provider separately - it does have options for assume_role.web_identity_token_file and assume_role.role_arn
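On GitHub Actions specifically, one thing stands out in the workflow above (an observation, not a guaranteed fix): aws-actions/configure-aws-credentials exports the temporary credentials it obtains as environment variables for later steps in the same job only, and only the build job assumes the role. The terraform job, which actually runs terraform init, never configures credentials, so the S3 backend finds none. A sketch of the terraform job with its own credentials step (role ARN copied from the build job above):

```yaml
terraform:
  runs-on: ubuntu-latest
  needs: build
  permissions:
    id-token: write   # OIDC token is needed in this job too
    contents: read
  steps:
    - uses: actions/checkout@v2
    - uses: aws-actions/configure-aws-credentials@v1
      with:
        aws-region: us-east-1
        role-to-assume: arn:aws:iam::262267462662:role/slb-dev-github-actions-role
        role-session-name: GithubActionsSession
    # With credentials exported to the environment, no access_key/secret_key
    # backend-config flags are needed:
    - run: terraform init
```

With this in place the backend picks up AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN from the environment like any other AWS SDK client.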

using Github actions with codebuild

I am using GitHub Actions with CodeBuild, but whenever I run the workflow I get the following error message:
STARTING CODEBUILD
Error: The security token included in the request is invalid
name: 'GitHub Actions For CodeBuild'
on:
  pull_request:
    branches:
      - test
env:
  tf_version: 'latest'
  tg_version: 'latest'
jobs:
  deploy:
    name: 'Build and Deploy'
    runs-on: ubuntu-latest
    steps:
      - name: 'checkout'
        uses: actions/checkout@v2
      - name: configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
          role-to-assume: ${{ secrets.AWS_ROLE_TO_ASSUME }}
          role-duration-seconds: 3600
      - name: Run CodeBuild
        uses: aws-actions/aws-codebuild-run-build@v1
        with:
          project-name: CodeBuild
          buildspec-override: stage/dev-env/buildspec.yml
          env-vars-for-codebuild: |
            TF_INPUT,
            AWS_ACCESS_KEY_ID,
            AWS_SECRET_ACCESS_KEY,
            AWS_REGION,
            ROLE_TO_ASSUME,
            ROLE_DURATION_SECONDS,
        env:
          TF_INPUT: false
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_REGION: us-east-1
          ROLE_TO_ASSUME: ${{ secrets.AWS_ROLE_TO_ASSUME }}
          ROLE_DURATION_SECONDS: 3600
The error message indicates that the given role or keys are not valid for executing the action.
You set the access key and secret key in both the 'configure AWS credentials' and 'Run CodeBuild' steps. Looking into the repository for 'aws-actions/aws-codebuild-run-build@v1', it seems they only need to be configured in the first step. I'm not sure how many environments you are expecting to deploy to, but if there is only one, then the env block is redundant.
https://github.com/aws-actions/aws-codebuild-run-build
Something like this, I expect:
steps:
  - name: 'checkout'
    uses: actions/checkout@v2
  - name: configure AWS credentials
    uses: aws-actions/configure-aws-credentials@v1
    with:
      aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
      aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      aws-region: us-east-1
      role-to-assume: ${{ secrets.AWS_ROLE_TO_ASSUME }}
      role-duration-seconds: 3600
  - name: Run CodeBuild
    uses: aws-actions/aws-codebuild-run-build@v1
    with:
      project-name: CodeBuild
      buildspec-override: stage/dev-env/buildspec.yml

Github Workflows CI/CD failing

My CI/CD pipeline using GitHub workflows is failing with the following error:
Error: Unable to process command '##[add-path]/opt/hostedtoolcache/aws/0.0.0/x64' successfully.
Error: The add-path command is disabled. Please upgrade to using Environment Files or opt into unsecure command execution by setting the ACTIONS_ALLOW_UNSECURE_COMMANDS environment variable to true. For more information see: https://github.blog/changelog/2020-10-01-github-actions-deprecating-set-env-and-add-path-commands/
This is my container.yml file
name: deploy-container
on:
  push:
    branches:
      - master
      - develop
    paths:
      - "packages/container/**"
defaults:
  run:
    working-directory: packages/container
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: npm install
      - run: npm run build
      - uses: chrislennon/action-aws-cli@v1.1
      - run: aws s3 sync dist s3://${{ secrets.AWS_S3_BUCKET_NAME }}/container/latest
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
Any idea why this might be happening? Thanks in advance.
I know the tutorial this is from. Add

- name: ACTIONS_ALLOW_UNSECURE_COMMANDS
  run: echo 'ACTIONS_ALLOW_UNSECURE_COMMANDS=true' >> $GITHUB_ENV

before

- uses: chrislennon/action-aws-cli@v1.1

and it should work.
Alternatively, replace that action with the official aws-actions/configure-aws-credentials, which doesn't rely on the deprecated add-path command:
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: npm install
      - run: npm run build
      - uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
      - run: aws s3 sync dist s3://${{ secrets.AWS_S3_BUCKET_NAME }}/container/latest

Assigning the value of a gcloud call to a GitHub actions variable

Objective: To grab the JOB_ID of an unknown running pipeline using gcloud and assign it to a variable to use later when I drain the pipeline.
run: 'gcloud dataflow jobs list --region us-central1 --status active --filter DataflowToBigtable --format="value(JOB_ID.scope())"'
This will output something like DataflowToBigtable0283384848, which is the JOB_ID I want to use. I don't know this value at the start and can't assign it from a secret. So my action looks like this:
name: PI-Engine-Deploy
on:
  push:
    branches: [ develop, feature/deployment-workflow ]
env:
  BUCKET_NAME: ret-dev-dataflow
  PROJECT_ID: ret-01-dev
  REGION: us-central1
  PUBSUB_ID: steps.pubsub.outputs.JOB_ID  # I want to assign the value here.
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up JDK 11
        uses: actions/setup-java@v2
        with:
          java-version: '11'
          distribution: 'adopt'
      - name: Build with Maven
        run: mvn -B package --file pom.xml
      - name: Set up Cloud SDK
        uses: google-github-actions/setup-gcloud@master
        with:
          project_id: ${{ secrets.GCP_PROJECT_ID }}
          service_account_key: ${{ secrets.GCP_SA_KEY }}
          export_default_credentials: true
      - name: Use gcloud CLI
        run: gcloud info
      - name: Install Maven
        run: mvn install
      ## Gets the id for pubsub pipeline
      - name: Get the targeted pipelines
        id: pubsub
        run: 'gcloud dataflow jobs list --region us-central1 --status active --filter DataflowToBigtable --format="value(JOB_ID.scope())"'
      - name: Drain Pubsub
        run: 'gcloud dataflow jobs drain ${{ PUBSUB_ID }}'  ## I want to use the assigned value here.
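A common pattern for this (a sketch, not tested against this pipeline): the top-level env block is evaluated before any step runs, so it cannot hold a step's result. Instead, capture the gcloud output as a step output inside the pubsub step and reference it later via steps.pubsub.outputs, here using the $GITHUB_OUTPUT mechanism (older workflows used the deprecated ::set-output command):

```yaml
- name: Get the targeted pipelines
  id: pubsub
  run: |
    JOB_ID=$(gcloud dataflow jobs list --region us-central1 --status active \
      --filter DataflowToBigtable --format="value(JOB_ID.scope())")
    # Expose the value as a step output for later steps
    echo "job_id=$JOB_ID" >> "$GITHUB_OUTPUT"
- name: Drain Pubsub
  run: gcloud dataflow jobs drain ${{ steps.pubsub.outputs.job_id }} --region us-central1
```

Note that `${{ PUBSUB_ID }}` in the original is not valid expression syntax anyway; a bare name must be prefixed with a context such as env. or steps.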