How to configure GitHub Actions to assume a role to access my CodeBuild project

I am using GitHub Actions with CodeBuild. When I run GitHub Actions, I get the error message "CodeBuild project name can not be found." The issue is that my CodeBuild project lives in the account of my assumed role (sandbox_role), but the GitHub Action looks for the project in the root account whose credentials I configured as GitHub secrets. How can I configure my GitHub Actions workflow to first connect to the root account and from there assume sandbox_role to reach my CodeBuild project? Below is my code sample. I am using Terragrunt/Terraform to provision my environment.
name: 'GitHub Actions For CodeBuild'
on:
  pull_request:
    branches:
      - staging
jobs:
  CodeBuild:
    name: 'Build'
    runs-on: ubuntu-latest
    steps:
      - name: 'checkout'
        uses: actions/checkout@v2
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
      - name: Run CodeBuild
        uses: aws-actions/aws-codebuild-run-build@v1
        with:
          project-name: CodeBuild
          buildspec-override: staging/buildspec.yml
          env-vars-for-codebuild: |
            TF_INPUT,
            AWS_ACCESS_KEY_ID,
            AWS_SECRET_ACCESS_KEY,
            AWS_REGION,
        env:
          TF_INPUT: false
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_REGION: us-east-1

I'm not entirely sure whether this solves it, but whenever I use roles I also pass the role ARN to tell AWS which role is being used and which permissions it should have.
The role ARN can be added in the configuration (~/.aws/config) and credentials (~/.aws/credentials) files:
Configuration:
region = us-east-1
output = json
role_arn = arn:aws:iam::account_id:role/role-name
source_profile = default
Credentials:
[default]
aws_access_key_id = "your_key_id"
aws_secret_access_key = "your_access_key"
aws_session_token = "your_session_token"
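In a GitHub Actions workflow, the same idea maps to the role-to-assume input of aws-actions/configure-aws-credentials: the action first authenticates with the provided access keys (the root account here) and then calls STS AssumeRole for the target role. A minimal sketch, where the secret name SANDBOX_ROLE_ARN is a hypothetical placeholder for the sandbox role's ARN:

- name: Configure AWS credentials and assume sandbox_role
  uses: aws-actions/configure-aws-credentials@v1
  with:
    aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
    aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    aws-region: us-east-1
    # Hypothetical secret holding arn:aws:iam::<sandbox_account_id>:role/sandbox_role
    role-to-assume: ${{ secrets.SANDBOX_ROLE_ARN }}
    role-duration-seconds: 1200

With this step in place, the subsequent aws-actions/aws-codebuild-run-build step runs with the assumed role's credentials and therefore looks up the CodeBuild project in the sandbox account instead of the root account.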

Related

GitHub Actions aws-actions/configure-aws-credentials with multiple accounts for deploy and state backend

I'm trying to set up a CI pipeline on GitHub Actions that can support multiple AWS accounts.
I have a "dev" account for deploying all the dev infrastructure and an "admin" account in which we manage Terraform state (in an S3 bucket) for multiple projects, including this one.
So when deploying with CDKTF I must use two accounts: one for the AWS provider, which performs the deploy, and another to access the bucket holding the state.
I created two roles in the two accounts separately, as explained in the aws-actions/configure-aws-credentials repository, and configured the CI like this:
- name: Configure AWS Credentials for Dev
  uses: aws-actions/configure-aws-credentials@v1
  with:
    aws-region: ${{ env.AWS_REGION }}
    role-to-assume: ${{ secrets.CI_ROLE_ARN_DEV }}
    role-session-name: dev-session
- name: Configure AWS Credentials for Admin
  uses: aws-actions/configure-aws-credentials@v1
  with:
    aws-region: ${{ env.AWS_REGION }}
    role-to-assume: ${{ secrets.CI_ROLE_ARN_ADMIN }}
    role-session-name: admin-session
But when running cdktf deploy I get the error "failed to get shared config profile, dev", where dev is the name of the profile specified in the AwsProvider like this:
const provider = new AwsProvider(this, 'aws-provider', {
  region: awsRegion,
  profile: 'dev',
});
In fact, when executing aws sts get-caller-identity in GitHub Actions, the output shows I'm logged in as the admin account, because that's the last one I configured.
I don't know how I can tell GitHub to manage both accounts at the same time.
Keep in mind: when deploying locally, if I log in via SSO (aws sso login --profile <profile-name>) with both profiles, I can deploy everything with no problem at all.
Following another Stack Overflow question (that I can't find right now), I tried configuring the profiles directly in the CI like this:
- name: Configure aws credentials
  run: |
    aws configure set aws_access_key_id ${{ secrets.AWS_ACCESS_KEY_ID_DEV }} --profile dev
    aws configure set aws_secret_access_key ${{ secrets.AWS_SECRET_ACCESS_KEY_DEV }} --profile dev
    aws configure set aws_access_key_id ${{ secrets.AWS_ACCESS_KEY_ID_ADMIN }} --profile admin
    aws configure set aws_secret_access_key ${{ secrets.AWS_SECRET_ACCESS_KEY_ADMIN }} --profile admin
    cat "$AWS_SHARED_CREDENTIALS_FILE"
Despite cat showing the right information, when running cdktf deploy I get the following:
error configuring S3 Backend: no valid credential sources for S3 Backend found.
Also, when calling aws sts get-caller-identity on either profile I always get:
An error occurred (InvalidClientTokenId) when calling the GetCallerIdentity operation: The security token included in the request is invalid.
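One pattern worth sketching here (an illustration under assumptions, not a verified fix): write both accounts into the shared AWS config as role-assuming profiles backed by a single set of base credentials, so that the AwsProvider (profile: 'dev') and the S3 backend (assuming it references profile = "admin") both resolve through files the SDK can parse. The secret names and the use of long-lived base keys are assumptions:

- name: Write AWS profiles for provider and backend
  run: |
    mkdir -p ~/.aws
    # Base credentials shared by both role-assuming profiles (hypothetical secrets)
    {
      echo "[default]"
      echo "aws_access_key_id = ${{ secrets.AWS_ACCESS_KEY_ID }}"
      echo "aws_secret_access_key = ${{ secrets.AWS_SECRET_ACCESS_KEY }}"
    } >> ~/.aws/credentials
    # Named profiles that assume each account's role on demand
    {
      echo "[profile dev]"
      echo "role_arn = ${{ secrets.CI_ROLE_ARN_DEV }}"
      echo "source_profile = default"
      echo "[profile admin]"
      echo "role_arn = ${{ secrets.CI_ROLE_ARN_ADMIN }}"
      echo "source_profile = default"
    } >> ~/.aws/config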

How can I configure my AWS credentials in the shared credentials file for GitHub Actions?

I am trying to deploy a CI/CD pipeline for ECR in AWS.
It will push/pull images to/from ECR.
We are migrating an Azure pipeline to a GitHub Actions pipeline.
When I try to implement the pipeline, I get the error below:
[05:25:00] CredentialsProviderError: Profile Pinz could not be found or parsed in shared credentials file.
    at resolveProfileData (/home/runner/work/test-api/test-api/node_modules/@aws-sdk/credential-provider-ini/dist-cjs/resolveProfileData.js:26:11)
    at /home/runner/work/test-api/test-api/node_modules/@aws-sdk/credential-provider-ini/dist-cjs/fromIni.js:8:56
    at async loadFromProfile (/home/runner/work/test-api/test-api/node_modules/@pinzgolf/pinz-build/dist/publish/aws/GetCredentialsFromProfile.js:23:25)
    at async BuildDeployContext (/home/runner/work/test-api/test-api/node_modules/@pinzgolf/pinz-build/dist/publish/DeployContext.js:95:70)
    at async Publish (/home/runner/work/test-api/test-api/node_modules/@pinzgolf/pinz-build/dist/publish/Publish.js:14:21)
Error: Process completed with exit code 1.
Here is my workflow YAML file,
on:
  push:
    branches: [ main ]
name: Node Project `my-app` CI on ECR
jobs:
  deploy:
    name: Deploy
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Use Node 14.17.X
        uses: actions/setup-node@v2
        with:
          node-version: 14.17.X
      - name: 'Yarn'
        uses: borales/actions-yarn@v2.3.0
        with:
          cmd: install --frozen-lockfile --non-interactive
      - name: Update SAM version
        uses: aws-actions/setup-sam@v1
      - run: |
          wget https://github.com/aws/aws-sam-cli/releases/latest/download/aws-sam-cli-linux-x86_64.zip
          unzip aws-sam-cli-linux-x86_64.zip -d sam-installation
          sudo ./sam-installation/install --update
          sam --version
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-2
      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1
      - name: Build, tag, and push the image to Amazon ECR
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          ECR_REPOSITORY: test-pinz-api
          IMAGE_TAG: latest
        run: |
          gulp publish --profile-name development
We publish the environment with gulp, using the config file below:
{
  "apiDomainName": "domain",
  "assetsDomainName": "domain",
  "awsProfile": "Pinz",
  "bastionBucket": "bucketname",
  "corsDomains": ["domain"],
  "dbBackupSources": ["db source", "db source"],
  "dbClusterIdentifier": "cluster identifier",
  "designDomainName": "domain",
  "lambdaEcr": "ecr",
  "snsApplication": "sns",
  "snsServerKeySecretName": "name",
  "stackName": "name",
  "templateBucket": "bucketname",
  "userJwtPublicKey": "token",
  "websiteUrl": "domain",
  "wwwDomainName": "domain",
  "wwwEcr": "ecr repo"
}
I couldn't find the shared credentials file where the AWS credentials are saved, and I have no idea where the profile below is configured:
"awsProfile": "Pinz"
I searched all the project files but couldn't find the shared credentials.
I went through many documents and got close to an answer, but not the exact one. The page below says the file is at ~/.aws/credentials, but how does the JSON file above get credentials from there?
https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/loading-node-credentials-shared.html
Honestly, this ECR pipeline deployment is my first, and I didn't get a proper knowledge transfer about the process either.
I think I'm almost done, but gulp shows this error.
Can anyone guide me to where this shared credentials file would be? If it doesn't exist, how can I configure AWS credentials to authenticate with AWS?
Your gulp config file has the profile set to Pinz; remove that line completely:
{
  ...
  "awsProfile": "Pinz",
  ...
}
The action automatically picks up your access key ID and secret access key and exports them as environment variables that the AWS SDK can use.
The rest of the pipeline should then pick up the configured credentials automatically.
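For reference, after aws-actions/configure-aws-credentials runs, later steps in the job see the credentials and region as plain environment variables, which is exactly where the JavaScript SDK looks when no profile is forced. Roughly (real values are masked in the logs):

AWS_ACCESS_KEY_ID=AKIA...
AWS_SECRET_ACCESS_KEY=***
AWS_REGION=us-east-2
AWS_DEFAULT_REGION=us-east-2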

How to configure / use AWS CLI in GitHub Actions?

I'd like to run commands like aws amplify start-job in GitHub Actions. I understand the AWS CLI is pre-installed but I'm not sure how to configure it.
In particular, I'm not sure how the environment variables are named for all configuration options: some docs only mention AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, but nothing for the region and output settings.
I recommend using this AWS action to set all the AWS region and credential environment variables in the GitHub Actions environment. It doesn't set the output env vars, so you still need to do that yourself, but it has nice features: it makes sure credential env vars are masked in the output as secrets, supports assuming a role, and provides your account ID if you need it in other actions.
https://github.com/marketplace/actions/configure-aws-credentials-action-for-github-actions
I could provide the following secrets and env vars and then use the commands:
env:
  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  AWS_DEFAULT_REGION: us-east-1
  AWS_DEFAULT_OUTPUT: json
E.g.
deploy:
  runs-on: ubuntu-latest
  steps:
    - name: Deploy
      env:
        AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
        AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        AWS_DEFAULT_REGION: eu-west-1
        AWS_DEFAULT_OUTPUT: json
      run: aws amplify start-job --app-id xxx --branch-name master --job-type RELEASE
In my experience, the out-of-the-box AWS CLI that comes with the Actions runner works fine.
But sometimes you may prefer to use a credentials file (like the Terraform AWS provider does), and this is an example for that.
This base64-decodes the encoded file so the following steps can use it:
- name: Write into file
  id: write_file
  uses: timheuer/base64-to-file@v1.0.3
  with:
    fileName: 'myTemporaryFile.txt'
    encodedString: ${{ secrets.AWS_CREDENTIALS_FILE_BASE64 }}
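To have the AWS CLI or an SDK actually read the decoded file, point AWS_SHARED_CREDENTIALS_FILE at the path the action wrote. A short sketch, assuming the filePath output documented by timheuer/base64-to-file:

- name: Verify credentials from the decoded file
  env:
    # Path produced by the write_file step above
    AWS_SHARED_CREDENTIALS_FILE: ${{ steps.write_file.outputs.filePath }}
  run: aws sts get-caller-identity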

Using GitHub Actions for CI/CD on an AWS EC2 machine?

I am new to GitHub Actions workflows and was wondering whether it is possible to set up my EC2 machine directly for CI and CD after every push.
I have seen that it is possible with ECS, but I wanted a more straightforward solution: we are trying this out in our dev environment and don't want to overshoot our budget.
Is it possible, and if yes, how can I achieve it?
If you build your code in GitHub Actions and just want to copy the package to an existing EC2 instance, you can use the SCP Files action:
https://github.com/marketplace/actions/scp-files
- name: copy file via ssh key
  uses: appleboy/scp-action@master
  with:
    host: ${{ secrets.HOST }}
    username: ${{ secrets.USERNAME }}
    port: ${{ secrets.PORT }}
    key: ${{ secrets.KEY }}
    source: "tests/a.txt,tests/b.txt"
    target: "test"
If you have any other AWS resource that interacts with EC2 (or any other AWS service) and you want to use the AWS CLI, you can use the Configure AWS Credentials action:
https://github.com/aws-actions/configure-aws-credentials
- name: Configure AWS credentials from Test account
  uses: aws-actions/configure-aws-credentials@v1
  with:
    aws-access-key-id: ${{ secrets.TEST_AWS_ACCESS_KEY_ID }}
    aws-secret-access-key: ${{ secrets.TEST_AWS_SECRET_ACCESS_KEY }}
    aws-region: us-east-1
- name: Copy files to the test website with the AWS CLI
  run: |
    aws s3 sync . s3://my-s3-test-website-bucket
There is also a nice article whose goal is to build a CI/CD stack with GitHub Actions + AWS EC2, CodeDeploy, and S3.

AWS Lambda Serverless deploy asking for AWS provider credentials even after adding the credentials

I am trying to deploy my Lambda function using the command below:
serverless deploy
But I get this error:
ServerlessError: AWS provider credentials not found.
I have configured AWS credentials using the command below:
serverless config credentials --provider aws --key AKINA4KOZEU44A --secret X0DGlLin/Gn89GPIAOLr8gnwZWAWLCv+ --profile serverless-admin
and I am still getting the same error.
I resolved it by adding the profile in serverless.yml:
provider:
  name: aws
  runtime: nodejs8.10
  stage: dev
  region: us-east-1
  profile: serverless-admin
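The error appears because serverless config credentials wrote the keys under the named profile serverless-admin rather than default, so the framework only finds them once the profile is referenced. Equivalently, the profile can be selected per invocation without editing serverless.yml (a usage sketch):

serverless deploy --aws-profile serverless-admin
# or via the standard environment variable
AWS_PROFILE=serverless-admin serverless deploy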