error configuring S3 Backend: no valid credential sources for S3 Backend found

I've been trying to add a CircleCI CI/CD pipeline to my AWS project, which is written in Terraform.
The problem is that terraform init/plan/apply works on my local machine, but it throws this error in CircleCI.
Error -
Initializing the backend...
╷
│ Error: error configuring S3 Backend: no valid credential sources for S3 Backend found.
│
│ Please see https://www.terraform.io/docs/language/settings/backends/s3.html
│ for more information about providing credentials.
│
│ Error: NoCredentialProviders: no valid providers in chain. Deprecated.
│ For verbose messaging see aws.Config.CredentialsChainVerboseErrors
My CircleCI config is this -
version: 2.1
orbs:
  python: circleci/python@1.5.0
  # terraform: circleci/terraform@3.1.0
jobs:
  build:
    # will use a python 3.10.2 container
    docker:
      - image: cimg/python:3.10.2
    working_directory: ~/project
    steps:
      - checkout
      - run:
          name: Check python version
          command: python --version
      - run:
          name: get current dir
          command: pwd
      - run:
          name: list of things in that
          command: ls -a
      - run:
          name: Install terraform
          command: bash scripts/install_tf.sh
      - run:
          name: Init infrastructure
          command: bash scripts/init.sh dev
# Invoke jobs via workflows
workflows:
  .......
And my init.sh is -
cd ./Terraform
echo "arg: $1"
if [[ "$1" == "dev" || "$1" == "stage" || "$1" == "prod" ]];
then
  echo "environment: $1"
  terraform init -migrate-state -backend-config=backend.$1.conf -var-file=terraform.$1.tfvars
else
  echo "Wrong Argument"
  echo "Pass 'dev', 'stage' or 'prod' only."
fi
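For reference, scripts/install_tf.sh (called in the config above) isn't shown; a hypothetical version, assuming a pinned Terraform release and a sudo-capable image like cimg/python, might look like this:
#!/usr/bin/env bash
# Hypothetical install_tf.sh - not the project's actual script.
# Downloads a pinned Terraform release (the version here is an assumption) and installs it.
set -euo pipefail
TF_VERSION="1.3.9"
curl -fsSL -o /tmp/terraform.zip "https://releases.hashicorp.com/terraform/${TF_VERSION}/terraform_${TF_VERSION}_linux_amd64.zip"
unzip -o /tmp/terraform.zip -d /tmp
sudo mv /tmp/terraform /usr/local/bin/terraform
terraform version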
My main.tf is -
provider "aws" {
profile = "${var.profile}"
region = "${var.region}"
}
terraform {
backend "s3" {
}
}
And `backend.dev.conf` is -
bucket = "bucket-name"
key = "mystate.tfstate"
region = "ap-south-1"
profile = "dev"
Also, my terraform.dev.tfvars is -
region = "ap-south-1"
profile = "dev"
These work perfectly on my local machine (an M1 Mac), but the backend throws the error above in CircleCI. Yes, I've added environment variables with my aws_secret_access_key and aws_access_key_id, and it still doesn't work.
I've seen so many tutorials and nothing seems to solve this. I don't want to write AWS credentials in my code. Any idea how I can solve this?
Update:
I have updated my pipeline to this -
version: 2.1
orbs:
  python: circleci/python@1.5.0
  aws-cli: circleci/aws-cli@3.1.3
jobs:
  build:
    # will use a python 3.10.2 container
    docker:
      - image: cimg/python:3.10.2
    working_directory: ~/project
    # Checkout the code as the first step. This is a dedicated
    steps:
      - checkout
      - run:
          name: Check python version
          command: python --version
      - run:
          name: get current dir
          command: pwd
      - run:
          name: list of things in that
          command: ls -a
  aws-cli-cred-setup:
    executor: aws-cli/default
    steps:
      - aws-cli/setup:
          aws-access-key-id: aws_access_key_id
          aws-secret-access-key: aws_secret_access_key
          aws-region: region
      - run:
          name: get aws acc info
          command: aws sts get-caller-identity
  terraform-setup:
    executor: aws-cli/default
    working_directory: ~/project
    steps:
      - checkout
      - run:
          name: Install terraform
          command: bash scripts/install_tf.sh
      - run:
          name: Init infrastructure
          command: bash scripts/init.sh dev
    context: terraform
# Invoke jobs via workflows
workflows:
  dev_workflow:
    jobs:
      - build:
          filters:
            branches:
              only: main
      - aws-cli-cred-setup
      # context: aws
      - terraform-setup:
          requires:
            - aws-cli-cred-setup
But it still throws the same error.

You have probably added the aws_secret_access_key and aws_access_key_id to your project settings, but I don't see them being used in your pipeline configuration. You should do something like the following so they are known at runtime:
version: 2.1
orbs:
  python: circleci/python@1.5.0
jobs:
  build:
    docker:
      - image: cimg/python:3.10.2
    working_directory: ~/project
    environment:
      AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
      AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
    steps:
      - run:
          name: Check python version
          command: python --version
      ...
I would advise you to read about environment variables in the documentation.
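As a quick sanity check you can also fail the job early if the variables never reach the container. A minimal sketch of such a step (the step itself is my addition, assuming the keys are stored as project-level environment variables):
      - run:
          name: Sanity-check AWS environment variables
          command: |
            # Fails immediately if the project-level variables were not injected
            : "${AWS_ACCESS_KEY_ID:?not set}"
            : "${AWS_SECRET_ACCESS_KEY:?not set}"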

OK, I managed to fix this issue. You have to remove profile from the provider block and the other .tf files.
So my main.tf file is -
provider "aws" {
region = "${var.region}"
}
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 4.30"
}
}
backend "s3" {
}
}
And backend.dev.conf is -
bucket = "bucket"
key = "dev/xxx.tfstate"
region = "ap-south-1"
And most importantly, you have to put the secret access key, access key ID and region inside CircleCI -> your project -> Environment Variables.
You also have to set up the AWS CLI on CircleCI, apparently inside the same job, in config.yml -
version: 2.1
orbs:
  python: circleci/python@1.5.0
  aws-cli: circleci/aws-cli@3.1.3
jobs:
  build:
    # will use a python 3.10.2 container
    docker:
      - image: cimg/python:3.10.2
    working_directory: ~/project
  plan-apply:
    executor: aws-cli/default
    docker:
      - image: docker.mirror.hashicorp.services/hashicorp/terraform:light
    working_directory: ~/project
    steps:
      - checkout
      - aws-cli/setup:
          aws-access-key-id: aws_access_key_id
          aws-secret-access-key: aws_secret_access_key
          aws-region: region
      - run:
          name: get aws acc info
          command: aws sts get-caller-identity
      - run:
          name: Init infrastructure
          command: sh scripts/init.sh dev
      - run:
          name: Plan infrastructure
          command: sh scripts/plan.sh dev
      - run:
          name: Apply infrastructure
          command: sh scripts/apply.sh dev
.....
.....
This solved the issue. But you have to run init, plan and apply inside the job where you set up the AWS CLI. I might be wrong to do the setup and the plan inside the same job, but I'm learning and this did the job. The APIs have changed and old tutorials don't work nowadays.
Comment your suggestions if you have any.
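The plan.sh and apply.sh scripts referenced above aren't shown in the question; a hypothetical plan.sh in the same style as init.sh (apply.sh would be the same with terraform apply) might be:
cd ./Terraform
echo "arg: $1"
if [[ "$1" == "dev" || "$1" == "stage" || "$1" == "prod" ]];
then
  # Hypothetical: plan against the same per-environment var file used by init.sh
  terraform plan -var-file=terraform.$1.tfvars
else
  echo "Wrong Argument"
  echo "Pass 'dev', 'stage' or 'prod' only."
fi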

Adding a profile to your backend will fix this issue. Example:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.30"
    }
  }
  backend "s3" {
    bucket  = "terraform-state"
    region  = "ap-south-1"
    key     = "dev/xxx.tfstate"
    profile = "myAwsCliProfile"
  }
}
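For this to work, the named profile has to exist wherever terraform init runs. A minimal ~/.aws/credentials entry matching the example (placeholder values):
[myAwsCliProfile]
aws_access_key_id     = <your-access-key-id>
aws_secret_access_key = <your-secret-access-key>
In CI, where that file usually doesn't exist, relying on the standard AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY environment variables (as in the answers above) is the more common route.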

Related

whitelist AWS RDS on CircleCI

I have a CircleCI configuration to run my tests before merging to master. I start my server to do my tests, and then I need to connect to my RDS database, which is protected with security groups. I tried to whitelist the CircleCI IP to allow this to happen, but with no luck.
version: 2.1
orbs:
  aws-white-list-circleci-ip: configure/aws-white-list-circleci-ip@1.0.0
  aws-cli: circleci/aws-cli@0.1.13
jobs:
  aws_setup:
    docker:
      - image: cimg/python:3.11.0
    steps:
      - aws-cli/install
      - aws-white-list-circleci-ip/add
  build:
    docker:
      - image: cimg/node:18.4
    steps:
      - checkout
      - run: node --version
      - restore_cache:
          name: Restore Npm Package Cache
          keys:
            # Find a cache corresponding to this specific package-lock.json checksum
            # when this file is changed, this key will fail
            - v1-npm-deps-{{ checksum "package-lock.json" }}
            # Find the most recently generated cache used from any branch
            - v1-npm-deps-
      - run: npm install
      - run:
          name: start the server
          command: npm start
          background: true
      - save_cache:
          name: Save Npm Package Cache
          key: v1-npm-deps-{{ checksum "package-lock.json" }}
          paths:
            - ./node_modules
      - run:
          name: run tests
          command: npm run test
      - aws-white-list-circleci-ip/remove
workflows:
  build-workflow:
    jobs:
      - aws_setup:
          context: aws_context
      - build:
          requires:
            - aws_setup
          context: aws_context
My context environment variables:
AWS_ACCESS_KEY_ID
AWS_DEFAULT_REGION
AWS_SECRET_ACCESS_KEY
GROUPID
The error is shown in a screenshot (not reproduced here).
The orb I am using:
https://circleci.com/developer/orbs/orb/configure/aws-white-list-circleci-ip
I figured it out:
version: 2.1
orbs:
  aws-cli: circleci/aws-cli@0.1.13
jobs:
  build:
    docker:
      - image: cimg/python:3.11.0-node
    steps:
      - checkout
      - run: node --version
      - restore_cache:
          name: Restore Npm Package Cache
          keys:
            # Find a cache corresponding to this specific package-lock.json checksum
            # when this file is changed, this key will fail
            - v1-npm-deps-{{ checksum "package-lock.json" }}
            # Find the most recently generated cache used from any branch
            - v1-npm-deps-
      - run: npm install
      - aws-cli/install
      - run:
          command: |
            public_ip_address=$(wget -qO- http://checkip.amazonaws.com)
            echo "this computers public ip address is $public_ip_address"
            aws ec2 authorize-security-group-ingress --region $AWS_DEFAULT_REGION --group-id $GROUPID --ip-permissions "[{\"IpProtocol\": \"tcp\", \"FromPort\": 22, \"ToPort\": 7000, \"IpRanges\": [{\"CidrIp\": \"${public_ip_address}/32\",\"Description\":\"CircleCi\"}]}]"
      - save_cache:
          name: Save Npm Package Cache
          key: v1-npm-deps-{{ checksum "package-lock.json" }}
          paths:
            - ./node_modules
      - run:
          name: run tests
          command: npm run test
# Invoke jobs via workflows
# See: https://circleci.com/docs/2.0/configuration-reference/#workflows
workflows:
  build-workflow:
    jobs:
      - build:
          context: aws_context
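One caveat with this working version: unlike the first attempt with the whitelist orb, it authorizes the ingress rule but never removes it again. A hedged sketch of a cleanup step that could be appended to the build job's steps (reusing the same $GROUPID and port range, with when: always so it runs even when tests fail):
      - run:
          name: remove CircleCI ip from security group
          when: always
          command: |
            public_ip_address=$(wget -qO- http://checkip.amazonaws.com)
            aws ec2 revoke-security-group-ingress --region $AWS_DEFAULT_REGION --group-id $GROUPID --ip-permissions "[{\"IpProtocol\": \"tcp\", \"FromPort\": 22, \"ToPort\": 7000, \"IpRanges\": [{\"CidrIp\": \"${public_ip_address}/32\"}]}]"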

No valid credential source for S3 backend found with GitHub OIDC

I am working with GitHub OIDC to log in to AWS and deploy our Terraform code. I am stuck on terraform init. Most of the solutions on the internet point towards deleting the credentials file or providing the credentials explicitly. I can't do either of those: the credentials file does not exist with OIDC, and I don't want to provide the access key and secret ID explicitly in the backend module either, since that could be a security risk. Here's my GitHub deployment file:
name: AWS Terraform Plan & Deploy
on:
push:
paths:
- "infrastructure/**"
# branches-ignore:
# - '**'
pull_request:
env:
tf_actions_working_dir: infrastructure/env/dev-slb-alpha/dev
tf_actions_working_dir_prod: infrastructure/env/prod-slb-prod/prod
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
TF_WORKSPACE: "default"
TF_ACTION_COMMENT: 1
plan: "plan.tfplan"
BUCKET_NAME : "slb-dev-terraform-state"
AWS_REGION : "us-east-1"
jobs:
build:
runs-on: ubuntu-latest
permissions:
id-token: write
contents: read
steps:
- run: sleep 5 # there's still a race condition for now
- name: Clone Repository (Latest)
uses: actions/checkout#v2
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials#v1
with:
aws-region: us-east-1
role-to-assume: arn:aws:iam::262267462662:role/slb-dev-github-actions-role
role-session-name: GithubActionsSession
# - name: Configure AWS
# run: |
# export AWS_ROLE_ARN=arn:aws:iam::262267462662:role/slb-dev-github-actions-role
# # export AWS_WEB_IDENTITY_TOKEN_FILE=/tmp/awscreds
# export AWS_DEFAULT_REGION=us-east-1
# # echo AWS_WEB_IDENTITY_TOKEN_FILE=$AWS_WEB_IDENTITY_TOKEN_FILE >> $GITHUB_ENV
# echo AWS_ROLE_ARN=$AWS_ROLE_ARN >> $GITHUB_ENV
# echo AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION >> $GITHUB_ENV
- run: aws sts get-caller-identity
setup:
runs-on: ubuntu-latest
environment:
name: Dev
url: https://dev.test.com
name: checkov-action-dev
steps:
- name: Checkout repo
uses: actions/checkout#master
with:
submodules: 'true'
# - name: Add Space to Dev
# run: |
# sysconfig -r proc exec_disable_arg_limit=1
# shell: bash
- name: Run Checkov action
run: |
pip3 install checkov
checkov --directory /infrastructure
id: checkov
# uses: bridgecrewio/checkov-action#master
# with:
# directory: infrastructure/
#skip_check: CKV_AWS_1
# quiet: true
# soft_fail: true
#framework: terraform
tfsec:
name: tfsec
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout#v2
# - name: Terraform security scan
# uses: aquasecurity/tfsec-pr-commenter-action#v0.1.10
# env:
# GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: tfsec
uses: tfsec/tfsec-sarif-action#master
with:
# sarif_file: tfsec.sarif
github_token: ${{ secrets.INPUT_GITHUB_TOKEN }}
# - name: Upload SARIF file
# uses: github/codeql-action/upload-sarif#v1
# with:
# sarif_file: tfsec.sarif
superlinter:
name: superlinter
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout#v2
- name: Scan Code Base
# uses: github/super-linter#v4
# env:
# VALIDATE_ALL_CODEBASE: false
# # DEFAULT_BRANCH: master
# GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# VALIDATE_TERRAFORM_TERRASCAN: false
uses: terraform-linters/setup-tflint#v1
with:
tflint_version: v0.29.0
terrascan:
name: terrascan
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout#v2
- name: Run Terrascan
id: terrascan
uses: accurics/terrascan-action#v1
with:
iac_type: "terraform"
iac_version: "v15"
policy_type: "aws"
only_warn: true
#iac_dir:
#policy_path:
#skip_rules:
#config_path:
terraform:
defaults:
run:
working-directory: ${{ env.tf_actions_working_dir}}
name: "Terraform"
runs-on: ubuntu-latest
needs: build
steps:
- name: Clone Repository (Latest)
uses: actions/checkout#v2
if: github.event.inputs.git-ref == ''
- name: Clone Repository (Custom Ref)
uses: actions/checkout#v2
if: github.event.inputs.git-ref != ''
with:
ref: ${{ github.event.inputs.git-ref }}
- name: Setup Terraform
uses: hashicorp/setup-terraform#v1
with:
terraform_version: 1.1.2
- name: Terraform Format
id: fmt
run: terraform fmt -check
- name: Terraform Init
id: init
run: |
# # cat ~/.aws/crendentials
# # export AWS_PROFILE=pki-aws-informatics
# aws configure list-profiles
#terraform init -backend-config="bucket=slb-dev-terraform-state"
terraform init -backend-config="access_key=${{ env.AWS_ACCESS_KEY_ID}}" -backend-config="secret_key=${{ env.AWS_SECRET_ACCESS_KEY}}"
terraform init --backend-config="access_key=${{ env.AWS_ACCESS_KEY_ID}}" --backend-config="secret_key=${{ env.AWS_SECRET_ACCESS_KEY}}"
- name: Terraform Validate
id: validate
run: terraform validate -no-color
- name: Terraform Plan
id: plan
run: terraform plan -var-file="terraform.tfvars" -out=${{ env.plan }}
- uses: actions/github-script#0.9.0
if: github.event_name == 'pull_request'
env:
PLAN: "terraform\n${{ steps.plan.outputs.stdout }}"
with:
github-token: ${{ secrets.INPUT_GITHUB_TOKEN }}
script: |
const output = `#### Terraform Format and Style 🖌\`${{ steps.fmt.outcome }}\`
#### Terraform Initialization ⚙️\`${{ steps.init.outcome }}\`
#### Terraform Validation 🤖${{ steps.validate.outputs.stdout }}
#### Terraform Plan 📖\`${{ steps.plan.outcome }}\`
<details><summary>Show Plan</summary>
\`\`\`${process.env.PLAN}\`\`\`
</details>
*Pusher: #${{ github.actor }}, Action: \`${{ github.event_name }}\`, Working Directory: \`${{ env.tf_actions_working_dir }}\`, Workflow: \`${{ github.workflow }}\`*`;
github.issues.createComment({
issue_number: context.issue.number,
owner: context.repo.owner,
repo: context.repo.repo,
body: output
})
As you can see, I have tried it a couple of ways and still end up with the same error. I have made sure that the profile we are using is correct, and I cannot provide credentials in the init command itself. It is authenticating against the correct profile, since it fetches the correct ARN for the profile I need it to work with. I also read somewhere that the credentials for AWS profiles and S3 could be different; if that is the case, how can I integrate OIDC in that project? Not sure what or where I might be going wrong otherwise. I'd appreciate any help or pointers.
I can't give advice specific to GitHub (since I'm using Bitbucket), but if you're using OIDC for access to AWS from your SCM of choice, the same principles apply. The S3 backend for Terraform itself doesn't allow specifying any of the normal configuration for OIDC, but you can set this with environment variables and have it work:
AWS_WEB_IDENTITY_TOKEN_FILE=<web-identity-token-file>
AWS_ROLE_ARN=arn:aws:iam::<account-id>:role/<role-name>
For Bitbucket Pipelines users:
Specify oidc: true in your pipelines config
Write the OIDC token file using e.g. echo $BITBUCKET_STEP_OIDC_TOKEN > $(pwd)/web-identity-token
Export the environment variables as above
I've split my S3 backend storage away from the account that has resources, so will need to look at configuring the actual AWS provider separately - it does have options for assume_role.web_identity_token_file and assume_role.role_arn
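Putting those pieces together, a minimal sketch of the script section of a Bitbucket Pipelines step (assuming oidc: true is enabled on the step and the placeholders are filled in with your own account and role):
# oidc: true on the step makes BITBUCKET_STEP_OIDC_TOKEN available
echo "$BITBUCKET_STEP_OIDC_TOKEN" > "$(pwd)/web-identity-token"
export AWS_WEB_IDENTITY_TOKEN_FILE="$(pwd)/web-identity-token"
export AWS_ROLE_ARN="arn:aws:iam::<account-id>:role/<role-name>"
terraform init   # the S3 backend resolves credentials from these variables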

Github (On premise self hosted): Could not create directory [/home/master/.config/gcloud/logs/2022.02.10]

I am trying to build a Docker image and push it to GCP Artifact Registry, but it is failing in GitHub Actions. Here is my workflow YAML file:
on:
  push:
    branches:
      - main
      - featurev1
name: Build and Deploy to Cloud Run
env:
  REGION: 'europe-west1'
  PROJECT_ID: 'myproject'
  CLUSTER_NAME: 'myproject-cluster'
  LOCATION: 'europe-west1'
  ZONE: 'europe-west1'
  ARTIFACT_REGISTRY: 'myproject-cust-seg'
  TARGET_ENV: 'INT'
  NAMESPACE: 'integration'
jobs:
  deploy:
    runs-on: [ self-hosted ]
    # Add "id-token" with the intended permissions.
    #permissions:
    #  contents: 'read'
    #  id-token: 'write'
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Setup gcloud environment
        uses: google-github-actions/setup-gcloud@v0
        with:
          service_account_key: ${{ secrets.INT_PLATFORM_SERVICE_ACCOUNT_KEY }}
          project_id: ${{ env.PROJECT_ID }}
      # Alternative option - authentication via credentials json
      #- id: 'auth'
      #  uses: 'google-github-actions/auth@v0'
      #  with:
      #    credentials_json: ${{ secrets.INT_PLATFORM_SERVICE_ACCOUNT_KEY }}
      - name: Authorize Docker push
        run: gcloud auth configure-docker
      - name: Build and Push Container
        env:
          GIT_TAG: ${{ github.run_id }}
        run: |-
          docker build -t $LOCATION-docker.pkg.dev/$PROJECT_ID/$ARTIFACT_REGISTRY/custapi:$TARGET_ENV-v$GIT_TAG .
          docker push $LOCATION-docker.pkg.dev/$PROJECT_ID/$ARTIFACT_REGISTRY/custapi:$TARGET_ENV-v$GIT_TAG
But I have an error:
Run google-github-actions/setup-gcloud@v0
Error: google-github-actions/setup-gcloud failed with: failed to execute command gcloud --quiet config set project myproject: WARNING: Could not setup log file in /home/master/.config/gcloud/logs, (Could not create directory [/home/master/.config/gcloud/logs/2022.02.10]: Permission denied.
Please verify that you have permissions to write to the parent directory..
The configuration directory may not be writable. To learn more, see https://cloud.google.com/sdk/docs/configurations#creating_a_configuration
ERROR: (gcloud.config.set) Failed to create the default configuration. Ensure your have the correct permissions on: [/home/master/.config/gcloud/configurations].
Could not create directory [/home/master/.config/gcloud/configurations]: Permission denied.
Please verify that you have permission to write to the parent directory.
Right now I have used the service key JSON file as a secret in GitHub Actions, as keyless authentication will be done in the near future, after the successful pilot of phase 1. You can find the details above.
Here I have set runs-on to "self-hosted", which is our on-premise GitHub Actions runner.
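One workaround worth trying on a self-hosted runner (an assumption on my part, not something verified in this thread) is to point gcloud at a config directory the runner user can write to, via the CLOUDSDK_CONFIG environment variable, before the setup-gcloud step:
      - name: Use a writable gcloud config dir
        run: |
          # RUNNER_TEMP is writable by the runner user; later steps read CLOUDSDK_CONFIG from GITHUB_ENV
          mkdir -p "$RUNNER_TEMP/gcloud"
          echo "CLOUDSDK_CONFIG=$RUNNER_TEMP/gcloud" >> "$GITHUB_ENV"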

Assigning the value of a gcloud call to a GitHub actions variable

Objective: to grab the JOB_ID of an unknown running pipeline using gcloud and assign it to a variable to use later when I drain the pipeline.
run: 'gcloud dataflow jobs list --region us-central1 --status active --filter DataflowToBigtable --format="value(JOB_ID.scope())"'
This will output something like DataflowToBigtable0283384848, which is the JOB_ID I want to use. I don't know this value at the start and can't assign it from a secret. So my action looks like this:
name: PI-Engine-Deploy
on:
  push:
    branches: [ develop, feature/deployment-workflow ]
env:
  BUCKET_NAME: ret-dev-dataflow
  PROJECT_ID: ret-01-dev
  REGION: us-central1
  PUBSUB_ID: steps.pubsub.outputs.JOB_ID  # I want to assign the value here.
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up JDK 11
        uses: actions/setup-java@v2
        with:
          java-version: '11'
          distribution: 'adopt'
      - name: Build with Maven
        run: mvn -B package --file pom.xml
      - name: Set up Cloud SDK
        uses: google-github-actions/setup-gcloud@master
        with:
          project_id: ${{ secrets.GCP_PROJECT_ID }}
          service_account_key: ${{ secrets.GCP_SA_KEY }}
          export_default_credentials: true
      - name: Use gcloud CLI
        run: gcloud info
      - name: Install Maven
        run: mvn install
      ## Gets the id for pubsub pipeline
      - name: Get the targeted pipelines
        id: pubsub
        run: 'gcloud dataflow jobs list --region us-central1 --status active --filter DataflowToBigtable --format="value(JOB_ID.scope())"'
      - name: Drain Pubsub
        run: 'gcloud dataflow jobs drain ${{ PUBSUB_ID }}'  ## I want to use the assigned value here.
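The usual mechanism for this kind of hand-off is a step output: write the gcloud result to $GITHUB_OUTPUT inside the step that has an id, then reference it with steps.<id>.outputs. A hedged sketch of the last two steps, reusing the names above:
      ## Gets the id for the pubsub pipeline and exposes it as a step output
      - name: Get the targeted pipelines
        id: pubsub
        run: |
          JOB_ID=$(gcloud dataflow jobs list --region us-central1 --status active --filter DataflowToBigtable --format="value(JOB_ID.scope())")
          echo "job_id=${JOB_ID}" >> "$GITHUB_OUTPUT"
      - name: Drain Pubsub
        run: gcloud dataflow jobs drain ${{ steps.pubsub.outputs.job_id }} --region us-central1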

How to run my jenkins pipeline code in AWS CodeBuild?

I can trigger my AWS pipeline from Jenkins, but I don't want to create a buildspec.yaml; instead I want to use the pipeline script that already works for Jenkins.
In order to use CodeBuild, you need to provide the CodeBuild project with a buildspec.yaml file along with your source code, or incorporate the commands into the actual project.
However, I think you are interested in having the creation of the buildspec.yaml file done within the Jenkins pipeline.
Below is a snippet of a stage within a Jenkinsfile. It creates a buildspec file for building Docker images and then sends the contents of the workspace to a CodeBuild project. This uses the CodeBuild plugin for Jenkins.
stage('Build - Non Prod'){
    String nonProductionBuildSpec = """
    version: 0.1
    phases:
      pre_build:
        commands:
          - \$(aws ecr get-login --registry-ids <number> --region us-east-1)
      build:
        commands:
          - docker build -t ces-sample-docker .
          - docker tag $NAME:$TAG <account-number>.dkr.ecr.us-east-1.amazonaws.com/$NAME:$TAG
      post_build:
        commands:
          - docker push <account-number>.dkr.ecr.us-east-1.amazonaws.com/$NAME:$TAG
    """.replace("\t"," ")
    writeFile file: 'buildspec.yml', text: nonProductionBuildSpec
    // Send checked out files to AWS
    awsCodeBuild projectName: "<codebuild-projectname>", region: "us-east-1", sourceControlType: "jenkins"
}
I hope this gives you an idea of what's possible.
Good luck!
Patrick
You will need to write a buildspec for the commands that you want AWS CodeBuild to run. If you use the CodeBuild plugin for Jenkins, you can add that to your Jenkins pipeline and use CodeBuild as a Jenkins build slave to execute the commands in your buildspec.
See more details here: https://docs.aws.amazon.com/codebuild/latest/userguide/jenkins-plugin.html
@hynespm - excellent example mate.
Here is another one based off yours but with stripIndent() and "withAWS" to switch roles:
#!/usr/bin/env groovy
def cbResult = null
pipeline {
    .
    .
    .
    script {
        echo ("app_version TestwithAWS value : " + "${app_version}")
        String buildspec = """\
            version: 0.2
            env:
              parameter-store:
                TOKEN: /some/token
            phases:
              pre_build:
                commands:
                  - echo "List files...."
                  - ls -l
                  - echo "TOKEN is ':' \${TOKEN}"
              build:
                commands:
                  - echo "build':' Do something here..."
                  - echo "\${CODEBUILD_SRC_DIR}"
                  - ls -l "\${CODEBUILD_SRC_DIR}"
              post_build:
                commands:
                  - pwd
                  - echo "postbuild':' Done..."
            """.stripIndent()
        withAWS(region: 'ap-southeast-2', role: 'CodeBuildWithJenkinsRole', roleAccount: '123456789123', externalId: '123456-2c1a-4367-aa09-7654321') {
            sh 'aws ssm get-parameter --name "/some/token"'
            try {
                cbResult = awsCodeBuild projectName: 'project-lambda',
                    sourceControlType: 'project',
                    credentialsType: 'keys',
                    awsAccessKey: env.AWS_ACCESS_KEY_ID,
                    awsSecretKey: env.AWS_SECRET_ACCESS_KEY,
                    awsSessionToken: env.AWS_SESSION_TOKEN,
                    region: 'ap-southeast-2',
                    envVariables: '[ { GITHUB_OWNER, special }, { GITHUB_REPO, project-lambda } ]',
                    artifactTypeOverride: 'S3',
                    artifactLocationOverride: 'special-artifacts',
                    overrideArtifactName: 'True',
                    buildSpecFile: buildspec
            } catch (Exception cbEx) {
                cbResult = cbEx.getCodeBuildResult()
            }
        }
    } //script
    .
    .
    .
}