Amplify Build using Secrets Manager - amazon-web-services

I am trying to access my Secrets Manager values as environment variables in the build of my Amplify application.
I have followed the AWS documentation and community threads/videos.
I have included the following in my build spec file, amplify.yml, per the guide: https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html#build-spec-ref-versions
version: 1
env:
  secrets-manager:
    TOKEN: mySecret:myKey
frontend:
  phases:
    preBuild:
      commands:
        - yarn install
    build:
      commands:
        - echo "$TOKEN"
        - yarn run build
  artifacts:
    baseDirectory: .next
    files:
      - '**/*'
  cache:
    paths:
      - node_modules/**/*
      - .next/cache/**/*
I have attached Secrets Manager access policies to my Amplify service role per community threads and this YouTube video:
https://youtu.be/jSY7Xerc8-s
However, echo "$TOKEN" returns blank.
Is there no way to access Secrets Manager key-value pairs in the Amplify build settings (https://docs.aws.amazon.com/amplify/latest/userguide/build-settings.html) the same way you can in CodeBuild (see the guide above)?
So far I have only been able to store my sensitive environment variables in Parameter Store (following this guide: https://docs.aws.amazon.com/amplify/latest/userguide/environment-variables.html), but from my understanding that does not seem secure, as the values are displayed in plain text when echoed and therefore exposed in the build logs, whereas values from Secrets Manager would be censored as '***'.
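One workaround sometimes used instead (a sketch, not confirmed by the post above; it assumes the Amplify service role allows secretsmanager:GetSecretValue on mySecret and that the AWS CLI and jq are available in the build image) is to fetch the value explicitly in the build commands rather than relying on the CodeBuild-style env/secrets-manager block:
version: 1
frontend:
  phases:
    preBuild:
      commands:
        - yarn install
    build:
      commands:
        # Hypothetical workaround: read the secret with the AWS CLI at build time.
        # 'mySecret' and 'myKey' are the names used in the question.
        - export TOKEN=$(aws secretsmanager get-secret-value --secret-id mySecret --query SecretString --output text | jq -r '.myKey')
        - yarn run build
  artifacts:
    baseDirectory: .next
    files:
      - '**/*'
Note that a value fetched this way is an ordinary shell variable, so unlike CodeBuild's secrets-manager integration it is not automatically masked if it is echoed in the build logs.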

Related

How to set up Terraform CI/CD with GCP and GitHub Actions in a multi-directory repository

Introduction
I have a repository with all the infrastructure defined using IaC, separated into folders. For instance, all Terraform configuration is in /terraform/. I want to apply all Terraform files inside that directory from CI/CD.
Configuration
The GitHub Actions workflow used is shown below:
name: 'Terraform'

on: [push]

permissions:
  contents: read

jobs:
  terraform:
    name: 'Terraform'
    runs-on: ubuntu-latest
    environment: production

    # Use the Bash shell regardless whether the GitHub Actions runner is ubuntu-latest, macos-latest, or windows-latest
    defaults:
      run:
        shell: bash
        #working-directory: terraform

    steps:
      # Checkout the repository to the GitHub Actions runner
      - name: Checkout
        uses: actions/checkout@v3

      # Install the latest version of Terraform CLI and configure the Terraform CLI configuration file with a Terraform Cloud user API token
      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v1

      - id: 'auth'
        uses: 'google-github-actions/auth@v1'
        with:
          credentials_json: '${{ secrets.GCP_CREDENTIALS }}'

      - name: 'Set up Cloud SDK'
        uses: 'google-github-actions/setup-gcloud@v1'

      # Initialize a new or existing Terraform working directory by creating initial files, loading any remote state, downloading modules, etc.
      - name: Terraform Init
        run: terraform init

      # Checks that all Terraform configuration files adhere to a canonical format
      - name: Terraform Format
        run: terraform fmt -check

      # On push to "master", build or change infrastructure according to Terraform configuration files
      # Note: It is recommended to set up a required "strict" status check in your repository for "Terraform Cloud". See the documentation on "strict" required status checks for more information: https://help.github.com/en/github/administering-a-repository/types-of-required-status-checks
      - name: Terraform Apply
        run: terraform apply -auto-approve -input=false
Problem
If I authenticate and then change the directory to apply Terraform, it does not find the credentials to log in:
storage.NewClient() failed: dialing: google: could not find default credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
On the other hand, if I don't change the directory, then it doesn't find the configuration files, as expected:
Error: No configuration files
I tried moving the Terraform configuration files to the root of the repository, and that works. How could I implement this in a multi-directory repository?
Such a feature was requested before. As explained in the issue, the auth file is named gha-creds-*.json.
Therefore, I added a step just before the Terraform steps to update the environment variable and move the file itself:
- name: 'Setup google auth in multidirectory repo'
  run: |
    echo "GOOGLE_APPLICATION_CREDENTIALS=$GITHUB_WORKSPACE/terraform/`ls -1 $GOOGLE_APPLICATION_CREDENTIALS | xargs basename`" >> $GITHUB_ENV
    mv $GITHUB_WORKSPACE/gha-creds-*.json $GITHUB_WORKSPACE/terraform/
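Not part of the original answer, but for completeness, a sketch of how the later Terraform steps could then run from the /terraform/ directory (either by uncommenting the working-directory default shown above or per step), so that both the configuration files and the relocated gha-creds-*.json are found:
# Sketch: run the Terraform steps from the /terraform/ directory used in the question.
- name: Terraform Init
  run: terraform init
  working-directory: terraform

- name: Terraform Apply
  run: terraform apply -auto-approve -input=false
  working-directory: terraform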

How to access API secrets from Next.js in AWS Amplify

I am very confused regarding how to set and access API secrets in a Next.js app within an AWS Amplify project.
The scenario is: I have a private API key that fetches data from an API. Obviously, this is a secret key and I don't want to share it in my github repo or the browser. I create a .env.local file and place my secret there.
API_KEY="qwerty123"
I am able to access this key in my code through using process.env.API_KEY
Here is an example fetch request with that API Key: https://developer.nps.gov/api/v1/parks?${parkCode}&api_key=${process.env.API_KEY}
This works perfectly when I run yarn dev and yarn build -> yarn start
This is the message I get when I run yarn start
next start
ready - started server on 0.0.0.0:3000, url: http://localhost:3000
info - Loaded env from /Users/tmo/Desktop/Code/projects/visit-national-parks/.env.local
The env is loaded and able to be called on my local machine.
However,
When I push this code to GitHub and start the build process in AWS Amplify, the app builds, but the API fetch calls do not work. I get a 500 Server Error.
This is what I have done to try and solve this issue:
1. Added my API_KEY in the Environment variables tab in Amplify
2. Updated my build settings:
frontend:
  phases:
    preBuild:
      commands:
        - yarn install
    build:
      commands:
        - API_KEY=${API_KEY} # Added my API_KEY from the environment variables tab in Amplify
        - yarn run build
I am not sure what else to do. After building the app again, I still get the 500 server error.
Here is the live Amplify app with the server error.
We're working on something similar right now. Our dev designed it so it reads an .env file.
frontend:
  phases:
    preBuild:
      commands:
        - yarn install
    build:
      commands:
        - echo API_KEY=$API_KEY >.env
        - echo OTHERKEY=$OTHER_KEY >> .env
        - yarn run build
We were able to pick it up and pass it to AWS' DynamoDB Client SDK.
Not sure if it's your call or not, but yarn can be fickle in our Amplify projects sometimes, so we usually resort to using npm if it starts acting up.

Azure DevOps YAML self-hosted agent pipeline build is stuck at locating the self-hosted agent

Action: I tried to configure and run a simple C++ Azure pipeline on a self-hosted Windows computer. I'm pretty new to all this. I ran the script below.
Expected: to see the build task, display task and clean task, and to see "hello world".
Result: error, the script can't find my build agent.
##[warning]An image label with the label Weltgeist does not exist.
##[error]The remote provider was unable to process the request.
Pool: Azure Pipelines
Image: Weltgeist
Started: Today at 10:16 p.m.
Duration: 14m 23s
Info & Test:
My self-hosted agent is named Weltgeist and it's part of the default agent pool. It's a Windows computer, with g++, MinGW and other related tools on it.
I tried my build task locally with no problem.
I tried my build task using the Azure 'ubuntu-latest' agent with no problem.
I created the self-hosted agent following this specification.
I'm the owner of the Azure repo.
https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/v2-windows?view=azure-devops
How do I correctly configure the pool YAML parameter for a self-hosted agent?
Do I have additional steps to do server-side, or in the Azure repo configuration?
Any other idea of what went wrong?
# Starter pipeline
# Start with a minimal pipeline that you can customize to build and deploy your code.
# Add steps that build, run tests, deploy, and more:
# https://aka.ms/yaml

trigger:
- master

pool:
  vmImage: 'Weltgeist' #Testing with self-hosted agent

steps:
- script: |
    mkdir ./build
    g++ -g ./src/hello-world.cpp -o ./build/hello-world.exe
  displayName: 'Run a build script'

- script: |
    ./build/hello-world.exe
  displayName: 'Run Display task'

- script: |
    rm -r build
  displayName: 'Clean task'
(UPDATE)
Solution:
Thanks, after updating it as stated in an answer below and reading a bit more about the pool YAML definition, it works. Note: I modified a couple of other lines to make it work in my environment.
trigger:
- master

pool:
  name: Default
  demands:
  - agent.name -equals Weltgeist

steps:
- script: |
    mkdir build
    g++ -o ./build/hello-world.exe ./src/hello-world.cpp
  displayName: 'Run a build script'

- script: |
    cd build
    hello-world.exe
    cd ..
  displayName: 'Run Display task'

- script: |
    rm -r build
  displayName: 'Clean task'
I was confused by the Default because there was already a pipeline named Default in the organization.
Expanding on the answers provided here.
pool:
  name: NameOfYourPool
  demands:
  - agent.name -equals NameOfYourAgent
You'll find that information on the Agent pools screen in DevOps.
Since you are using the self-hosted agent, you could use the following format:
pool:
  name: Default
  demands:
  - agent.name -equals Weltgeist
Then it should work as expected.
You could refer to the doc about the pool definition in YAML.
I had faced the same issue, and replacing vmImage under pool with name worked for me. Please find below:
trigger:
- master

pool:
  name: 'Weltgeist' #Testing with self-hosted agent
Also be aware that if your agent only appears in the "Azure Pipelines" pool and not in any of the other pools then the agent may have been configured to be an "Environment" resource, and can't be used as part of the build step.
I spent ages trying to use a self-hosted VM for a build step, thinking that the correct way to reference the VM was by creating a VM resource from the Pipelines > Environments area.
The agent would be properly created and visible in the "Azure Pipelines" pool, but wouldn't be available in any of the other pools, which then meant it couldn't be referenced in the YAML used for setting the server used for builds.
I was able to resolve the issue by de-registering the agent on my self-hosted VM with .\config.cmd remove and running ./config again without the --environment --environmentname "<name>" arguments that were provided within the registration script mentioned above (shown in the "Add resource" screenshot).
Oddly, the registration script is a much quicker way to register an Agent than the "New agent" form shown in Agent Pools:
The necessary files are pulled to the server (without having to download one first) and a PAT with a 3-hour lifetime is auto-generated.
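For contrast, and not from the answer above: an agent that stays registered as an environment resource is normally targeted from a deployment job via the environment field rather than via pool. A minimal sketch (the environment name MyEnvironment is hypothetical; Weltgeist is the VM name from the question):
jobs:
- deployment: DeployToVM
  displayName: 'Deploy via environment resource'
  environment:
    name: MyEnvironment          # hypothetical environment name
    resourceType: VirtualMachine
    resourceName: Weltgeist      # the registered VM resource
  strategy:
    runOnce:
      deploy:
        steps:
        - script: echo "running on the environment VM"
This is why such an agent is not selectable through the pool/demands syntax used in the answers above; re-registering it into an agent pool, as described, is what makes it usable for build jobs.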

Is it wrong/dangerous to include the aws-exports.js file in source control?

Amplify auto-ignores aws-exports.js in .gitignore, possibly simply because it may change frequently and is fully generated; however, maybe there are also security concerns?
For this project my GitHub repo is private, so that is not a concern, but I am wondering about future projects that could be public.
The reason I ask is that if I want to run my app setup/build/test through GitHub workflows, then I need this file for the build to complete properly on GitHub machines.
I also appear to need it for my Amplify CI hosting to work in the Amplify console (I have connected my Amplify console build->deploy to my GitHub master branch and it all works perfectly, but only when aws-exports.js is in source control).
Here is my amplify.yml. I am using reason-react with Next.js, and the Amplify console is telling me I have connected to the correct backend:
version: 1
frontend:
  phases:
    preBuild:
      commands:
        - yarn install
    build:
      commands:
        - yarn run build
  artifacts:
    baseDirectory: out
    files:
      - '**/*'
  cache:
    paths:
      - node_modules/**/*
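One way to avoid committing the file at all (a sketch, assuming the branch is connected to an Amplify backend environment in the Amplify console, where amplifyPush is the console's helper script) is to let the build regenerate aws-exports.js in a backend phase before the frontend build:
version: 1
backend:
  phases:
    build:
      commands:
        # Regenerates aws-exports.js from the connected backend environment,
        # so the file does not have to live in source control.
        - amplifyPush --simple
frontend:
  phases:
    preBuild:
      commands:
        - yarn install
    build:
      commands:
        - yarn run build
  artifacts:
    baseDirectory: out
    files:
      - '**/*'
  cache:
    paths:
      - node_modules/**/*
For a GitHub workflow build (outside the Amplify console), the equivalent would be generating the file with the Amplify CLI (e.g. amplify pull) before the build, rather than keeping it in the repository.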

How to get un-versioned files included when deploying serverless apps via GitHub Actions?

I have a Serverless project which requires a signed SSL certificate and private key for communication with an API. The cert/key aren't in version control, but they are in my local file system. The files get bundled with the Lambdas in the service and are accessible for use when deployed.
package:
  individually: true
  include:
    - signed-cert.pem
    - private-key.pem
Deployment is done via GitHub Actions.
e.g. npm install serverless ... npx serverless deploy
How could those files be included without adding them to version control? Could they be retrieved from S3? Some other way?
It looks like encrypting the files may work, but is there a better approach? The lambdas could fetch them from S3, but I'd rather avoid additional latency on every startup if possible.
Looks like adding a GitHub secret for the private key and certificate works. Just paste the cert/private key text into a GitHub secret e.g.
Secret: SIGNED_CERT, Value: -----BEGIN CERTIFICATE-----......-----END CERTIFICATE-----
Then in the GitHub Actions workflow:
- name: create ssl signed certificate
  run: 'echo "$SIGNED_CERT" > signedcert.pem'
  shell: bash
  env:
    SIGNED_CERT: ${{secrets.SIGNED_CERT}}
  working-directory: serverless/myservice

- name: create ssl private key
  run: 'echo "$PRIVATE_KEY" > private-key.pem'
  shell: bash
  env:
    PRIVATE_KEY: ${{secrets.PRIVATE_KEY}}
  working-directory: serverless/myservice
Set working-directory if the serverless.yml isn't at the root level of the project.
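As a follow-up sketch (not part of the original answer), the deploy step from the question would then run from the same directory, so the generated .pem files sit next to serverless.yml; the serverless/myservice path is simply carried over from the steps above:
- name: deploy service
  run: |
    # The .pem files written by the previous steps are bundled via the
    # package 'include' patterns in serverless.yml.
    npm install serverless
    npx serverless deploy
  shell: bash
  working-directory: serverless/myservice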