I have a CodeBuild project that pulls an image from a public Docker repository. I'm running into the well-known rate-limit issue from too many anonymous pulls, so I want to log in to Docker and pull the image as an authenticated user, because I have a valid Docker license.
However, I can't seem to find any documentation on how to set my credentials in CodeBuild. The only examples I can find log in via the buildspec.yml and then pull the Docker image. That does not work for me because I'm setting the Docker image in the CodeBuild configuration itself.
I'm using CDK and this is my current CodeBuild configuration:
const myCodeBuild = new codeBuild.Project(this, 'myCodeBuild', {
  source: githubsrc,
  secondarySources: [githubsrc2],
  role: new BuildRole(this, 'myCodeBuildRole').role,
  buildSpec: codeBuild.BuildSpec.fromObject(buildSpec),
  environment: {
    buildImage: codeBuild.LinuxBuildImage.fromDockerRegistry('salesforce/salesforcedx:latest-rc-full'),
  },
});
This creates a CodeBuild project that automatically uses the provided Docker image, so there is never a chance to log in before it is pulled.
fromDockerRegistry supports authentication. To use it, create a Secrets Manager secret whose value contains username and password fields with your Docker Hub credentials, and pass it to the function (see the documentation for the expected secret format).
Using the example from the docs:
environment: {
  buildImage: codebuild.LinuxBuildImage.fromDockerRegistry('my-registry/my-repo', {
    secretsManagerCredentials: secrets,
  }),
},
Here, secrets refers to your Secrets Manager secret.
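For completeness, here is a minimal sketch of how the two pieces could fit together in CDK, assuming CDK v2 (aws-cdk-lib) and an existing Secrets Manager secret named dockerhub-credentials (a hypothetical name) whose value holds username and password keys:

import * as codebuild from 'aws-cdk-lib/aws-codebuild';
import * as secretsmanager from 'aws-cdk-lib/aws-secretsmanager';

// Reference the existing secret that holds the Docker Hub username/password.
const secrets = secretsmanager.Secret.fromSecretNameV2(
  this,
  'DockerHubCredentials',
  'dockerhub-credentials', // hypothetical secret name
);

const myCodeBuild = new codebuild.Project(this, 'myCodeBuild', {
  source: githubsrc,
  secondarySources: [githubsrc2],
  buildSpec: codebuild.BuildSpec.fromObject(buildSpec),
  environment: {
    buildImage: codebuild.LinuxBuildImage.fromDockerRegistry(
      'salesforce/salesforcedx:latest-rc-full',
      { secretsManagerCredentials: secrets },
    ),
  },
});

If you pass your own role (as in the question), double-check that it is allowed to read the secret (secretsmanager:GetSecretValue); otherwise CodeBuild cannot authenticate the image pull.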
I use CDK to create and deploy a pipeline (with AWS CodePipeline, https://aws.amazon.com/codepipeline/), and today the Source stage stopped working, complaining about insufficient permissions, which turned out to be an expired GitHub token. Error:
"Could not access the GitHub repository: "pandaWebsite". The access token might be invalid or has been revoked. Edit the pipeline to reconnect with GitHub."
So I regenerated the GitHub token and updated it in AWS Secrets Manager, clicked the "Retry" button in the pipeline, and it still failed. Eventually I had to run cdk destroy to destroy the pipeline and cdk deploy to re-deploy it, and then it worked.
My question is: why did I have to destroy and re-deploy the pipeline? I was expecting that once I updated the token in Secrets Manager, it would just work.
More context
AWS Secrets Manager is where I store the GitHub token, and my CDK code fetches it from there. See the code here:
// Add Source stage to fetch code from GitHub repository.
private addSourceStage(
  pipeline: codepipeline.Pipeline,
  sourceCode: codepipeline.Artifact
) {
  pipeline.addStage({
    stageName: "Source",
    actions: [
      new codepipeline_actions.GitHubSourceAction({
        actionName: "Checkout",
        owner: "yangliu",
        repo: "pandaWebsite",
        branch: "main",
        // read the value from Secrets Manager
        oauthToken: CDK.SecretValue.secretsManager("github-token"),
        output: sourceCode,
        trigger: codepipeline_actions.GitHubTrigger.WEBHOOK,
      }),
    ],
  });
}
I'd like to use GitHub Actions to deploy resources to AWS, but without using a hard-coded user.
I know that it's possible to create an IAM user with fixed credentials and export those to GitHub Secrets, but this means that if the key ever leaks I have a large problem on my hands, and rotating such keys is challenging if forgotten.
Is there any way that I can enable a password-less authentication flow for deploying code to AWS?
Yes, it is possible now that GitHub has released its OpenID Connect provider for use with GitHub Actions. You can configure the OpenID Connect provider as an identity provider in AWS, and then use it as an access point to any role(s) that you wish to enable. You can then configure the action to use the credentials acquired for the duration of the job; when the job is complete, the credentials are automatically revoked.
To set this up in AWS, you need to create an OpenID Connect provider, either by following the instructions at AWS or with a Terraform configuration similar to the following:
resource "aws_iam_openid_connect_provider" "github" {
url = "https://token.actions.githubusercontent.com"
client_id_list = [
// original value "sigstore",
"sts.amazonaws.com", // Used by aws-actions/configure-aws-credentials
]
thumbprint_list = [
// original value "a031c46782e6e6c662c2c87c76da9aa62ccabd8e",
"6938fd4d98bab03faadb97b34396831e3780aea1",
]
}
The client_id_list is the 'audience' which is used to access this content; you can vary it, provided that you vary it in all the right places. The thumbprint is a hash of the OpenID Connect provider's certificate, and 6938...aea1 is the one currently used by GitHub Actions; you can calculate/verify the value by following AWS' instructions. The thumbprint_list can hold up to five values, so newer thumbprints can be appended as they become available, while continuing to accept older ones.
If you're interested in where this magic value came from, you can find out at How can I calculate the thumbprint of an OpenID Connect server?
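For reference, the manual check is roughly the following (a sketch with plain openssl; AWS' instructions describe which certificate in the chain to fingerprint, typically the top-most CA certificate shown):

# Print the certificate chain presented by the OIDC endpoint.
openssl s_client -servername token.actions.githubusercontent.com \
  -showcerts -connect token.actions.githubusercontent.com:443 </dev/null
# Save the relevant certificate from the output to a file (e.g. ca.pem), then:
openssl x509 -in ca.pem -fingerprint -sha1 -noout
# Remove the colons and lower-case the hex digits to get the thumbprint value.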
Once you have enabled the identity provider, you can use it to create one or more custom roles, for example:
data "aws_caller_identity" "current" {}
resource "aws_iam_role" "github_alblue" {
name = "GitHubAlBlue"
assume_role_policy = jsonencode({
Version = "2012-10-17",
Statement = [{
Action = "sts:AssumeRoleWithWebIdentity"
Effect = "Allow"
Principal = {
Federated = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:oidc-provider/token.actions.githubusercontent.com"
}
Condition = {
StringLike = {
"token.actions.githubusercontent.com:aud" : ["sts.amazonaws.com" ],
"token.actions.githubusercontent.com:sub" : "repo:alblue/*"
}
}
}]
})
}
You can create as many different roles as you need, and even split them up by audience (e.g. 'production', 'dev'). Provided that the OpenID Connect provider's audience is trusted by the account, you're good to go. (You can use this to ensure that the OpenID Connect provider in a dev account doesn't trust the roles in a production account, and vice versa.) You can have, for example, a read-only role for performing terraform validate and another role for terraform apply.
The subject is passed from GitHub, but looks like:
repo:<organization>/<repository>:ref:refs/heads/<branch>
There may be different formats that come out later. You could have an action/role specifically for PRs if you use :ref:refs/pulls/* for example, and have another role for :ref:refs/heads/production/*.
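For instance, a separate role that only trusts pull-request refs could look like the following sketch; it follows the :ref:refs/pulls/* pattern mentioned above, the role name is a placeholder, and it reuses the provider resource defined earlier:

resource "aws_iam_role" "github_pull_requests" {
  name = "GitHubPullRequestRole" # hypothetical role name
  assume_role_policy = jsonencode({
    Version = "2012-10-17",
    Statement = [{
      Action = "sts:AssumeRoleWithWebIdentity"
      Effect = "Allow"
      Principal = {
        Federated = aws_iam_openid_connect_provider.github.arn
      }
      Condition = {
        StringLike = {
          "token.actions.githubusercontent.com:aud" : ["sts.amazonaws.com"],
          # only pull-request refs from these repositories may assume this role
          "token.actions.githubusercontent.com:sub" : "repo:alblue/*:ref:refs/pulls/*"
        }
      }
    }]
  })
}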
The final step is getting your GitHub Actions configured to use the token that comes back from the AWS/OpenID Connect:
Standard Way
jobs:
  terraform-validate:
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Configure AWS credentials from Test account
        uses: aws-actions/configure-aws-credentials@master
        with:
          role-to-assume: arn:aws:iam::<accountid>:role/GitHubAlBlue
          aws-region: us-east-1
      - name: Display Identity
        run: aws sts get-caller-identity
Manual Way
What's actually happening under the covers is something like this:
jobs:
  terraform-validate:
    runs-on: ubuntu-latest
    env:
      AWS_WEB_IDENTITY_TOKEN_FILE: .git/aws.web.identity.token.file
      AWS_DEFAULT_REGION: eu-west-2
      AWS_ROLE_ARN: arn:aws:iam::<accountid>:role/GitHubAlBlue
    permissions:
      id-token: write
      contents: read
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Configure AWS
        run: |
          sleep 3 # Need to have a delay to acquire this
          curl -H "Authorization: bearer $ACTIONS_ID_TOKEN_REQUEST_TOKEN" \
            "$ACTIONS_ID_TOKEN_REQUEST_URL&audience=sts.amazonaws.com" \
            | jq -r '.value' > $AWS_WEB_IDENTITY_TOKEN_FILE
          aws sts get-caller-identity
You need to ensure that your AWS_ROLE_ARN matches the role defined in your AWS account, and that the audience matches both what the OpenID Connect provider accepts and what the role's trust policy expects.
Essentially, there's a race condition between the job starting and the token becoming valid, which doesn't happen until GitHub has confirmed the job has started; if the size of the AWS_WEB_IDENTITY_TOKEN_FILE is less than 10 characters, it's probably an error, and sleeping/spinning will get you the value eventually.
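If you prefer to handle that race explicitly rather than with a fixed sleep, a small retry loop works; this is a sketch that reuses the environment variables from the job above and the 10-character heuristic from this paragraph:

# Retry until the OIDC token has actually been issued (heuristic: more than 10 chars).
for attempt in 1 2 3 4 5; do
  curl -s -H "Authorization: bearer $ACTIONS_ID_TOKEN_REQUEST_TOKEN" \
    "$ACTIONS_ID_TOKEN_REQUEST_URL&audience=sts.amazonaws.com" \
    | jq -r '.value' > "$AWS_WEB_IDENTITY_TOKEN_FILE"
  [ "$(wc -c < "$AWS_WEB_IDENTITY_TOKEN_FILE")" -gt 10 ] && break
  sleep 3 # token not ready yet; wait and try again
done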
The name of the AWS_WEB_IDENTITY_TOKEN_FILE doesn't really matter, so long as it's consistent. If you're using docker containers, then storing it in e.g. /tmp will mean that it's not available in any running containers. If you put it under .git in the workspace, then not only will git ignore it (if you're doing any hash calculations) but it will also be present in any other docker run actions that you do later on.
You might want to configure your role so that its session validity period is limited; once you have the web identity token it's valid until the end of the job, but the token requested has a lifetime of 15 minutes, so it's possible for a longer-running job to expose that.
It's likely that GitHub will have a blog post on how to configure/use this in the near future. The above information was inspired by https://awsteele.com/blog/2021/09/15/aws-federation-comes-to-github-actions.html, who has some examples in CloudFormation templates if that's your preferred thing.
Update: GitHub (accidentally) changed their thumbprint and the example above has been updated; see the GitHub issue linked below for more information. The new thumbprint is 6938fd4d98bab03faadb97b34396831e3780aea1, but it's possible to have multiple thumbprints in the IAM OpenID Connect provider.
GitHub recently updated their certificate chain, and the thumbprint has changed from the one mentioned above (a031c46782e6e6c662c2c87c76da9aa62ccabd8e) to 6938fd4d98bab03faadb97b34396831e3780aea1.
GitHub issue: https://github.com/aws-actions/configure-aws-credentials/issues/357
For some reason I keep getting this error with AlBlue's answer:
"message":"Can't issue ID_TOKEN for audience 'githubactions'."
Deploying the stack provided in this post does work, however.
I am trying to deploy new changes to a Kubernetes cluster using Google Cloud Build. Whenever I make changes, the trigger works fine and starts a new build, but here is the issue I am getting with this cloudbuild.yaml.
cloudbuild.yaml
steps:
  # step 1
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/cloudbuildtest-image', '.']
  # step 2
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/cloudbuildtest-image']
  # step 3, for testing
  - name: 'gcr.io/cloud-builders/kubectl'
    args: ['get', 'pods']
    env:
      - 'CLOUDSDK_COMPUTE_ZONE=us-central1-a'
      - 'CLOUDSDK_CONTAINER_CLUSTER=cloudbuild-test'
# step 4
images:
  - 'gcr.io/$PROJECT_ID/cloudbuildtest-image'
Steps 1 and 2 are working fine, but the issue is with step 3, where for testing purposes I simply ran the get pods command to see whether it would work. Here is the issue I am getting in the logs.
Running: gcloud container clusters get-credentials --project="journeyfoods-io" --zone="us-central1-a" "cloudbuild-test"
Fetching cluster endpoint and auth data.
ERROR: (gcloud.container.clusters.get-credentials) ResponseError: code=403, message=Required "container.clusters.get" permission(s) for "projects/XXXX/zones/us-central1-a/clusters/cloudbuild-test".
What permissions is it looking for? Do I need to do some authentication before running the steps, or what exactly am I missing?
The steps of a Cloud Build build are executed using the [PROJECT_NUMBER]@cloudbuild.gserviceaccount.com service account. From the Cloud Build documentation page about this topic:
When you enable the Cloud Build API, the service account is
automatically created and granted the Cloud Build Service Account role
for your project. This role is sufficient for several tasks,
including:
Fetching code from your project's Cloud Source Repository
Downloading files from any Cloud Storage bucket owned by your project
Saving build logs in Cloud Logging
Pushing Docker images to Container Registry
Pulling base images from Container Registry
But this service account does not have permissions for certain actions by default (in particular, the container.clusters.get permission is not granted by default), so you need to grant it a proper IAM role. In your case, the Kubernetes Engine Developer role contains the container.clusters.get permission, as you can see on this page.
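For example, the role can be granted from the command line along these lines (a sketch; replace PROJECT_ID and PROJECT_NUMBER with your own values):

# Grant the Cloud Build service account the Kubernetes Engine Developer role,
# which includes container.clusters.get.
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:PROJECT_NUMBER@cloudbuild.gserviceaccount.com" \
  --role="roles/container.developer"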
I am trying to set up AWS CodePipeline with GitHub as the source, using CloudFormation. The GitHub repository is owned by an organization, and I have admin access to it.
I was able to create the webhook and the entire service successfully through the CodePipeline UI. But when I try to do the same thing through a CloudFormation document, it returns this error:
Webhook could not be registered with GitHub. Error cause: Not found [StatusCode: 404, Body: {"message":"Not Found","documentation_url":"https://developer.github.com/v3/repos/hooks/#create-a-hook"}]
I used the same credentials both times (an OAuth token in CloudFormation and the actual login popup in the CodePipeline UI), but when I do it through CloudFormation it fails.
I suspected my CloudFormation document was the issue. But when I use my own personal repository, CloudFormation successfully creates the webhook and the full CodePipeline service.
Below is the summary of tests I did to understand where it went wrong.
CodePipeline UI. Organization GitHub repo. It asked me to log in to GitHub; I logged in with my admin credentials => successfully created webhook and services.
CloudFormation. Organization GitHub repo. Used an OAuth token from the admin account with the repo and admin:repo_hook scopes enabled => gave the error above.
CloudFormation. Personal GitHub repo. Used an OAuth token from the admin account with the repo and admin:repo_hook scopes enabled => successfully created webhook and services.
The following is the portion of the CloudFormation document where I create the webhook:
AppPipelineWebhook:
  Type: 'AWS::CodePipeline::Webhook'
  Properties:
    Authentication: GITHUB_HMAC
    AuthenticationConfiguration:
      SecretToken: !Ref GitHubSecret
    Filters:
      - JsonPath: $.ref
        MatchEquals: 'refs/heads/{Branch}'
    TargetPipeline: !Ref cfSSMAutomationDev
    TargetAction: SourceAction
    Name: AppPipelineWebhook
    TargetPipelineVersion: !GetAtt cfSSMAutomationDev.Version
    RegisterWithThirdParty: true
So I am not sure what is wrong. My suspicion is that the OAuth token requires more privileges. Does anyone have similar experience with this? Any suggestion is much appreciated.
I was facing the same issue. Going by the Repository value shown in the CodePipeline UI configuration, I had used:
{
  "Configuration": {
    "Owner": "myUserName",
    "Repo": "orgname/repository-name"
  }
}
so CloudFormation was looking for the repository myUserName/orgname/repository-name, which doesn't exist.
It was solved by the following configuration:
{
  "Configuration": {
    "Owner": "orgname",
    "Repo": "repository-name"
  }
}
private repo -> ownerName: YourUserName
organisation repo -> ownerName: OrganisationName
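For reference, in the CloudFormation pipeline definition this maps onto the GitHub source action roughly as follows (a sketch; the action name matches the webhook's TargetAction above, while the branch and the GitHubOAuthToken parameter are placeholders):

- Name: SourceAction
  ActionTypeId:
    Category: Source
    Owner: ThirdParty
    Provider: GitHub
    Version: '1'
  Configuration:
    Owner: orgname            # the organisation name, not your username
    Repo: repository-name     # repository name only, without the "orgname/" prefix
    Branch: main
    OAuthToken: !Ref GitHubOAuthToken
    PollForSourceChanges: false  # the webhook triggers the pipeline instead of polling
  OutputArtifacts:
    - Name: SourceOutput
  RunOrder: 1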
I want to integrate my code from Bitbucket into AWS CodePipeline. I am unable to find proper examples of this. My source code is in .NET.
Can someone please guide me?
Thanks.
You can integrate Bitbucket with AWS CodePipeline by using webhooks that call an AWS API Gateway, which invokes a Lambda function (which calls into CodePipeline). There is an AWS blog that walks you through this: Integrating Git with AWS CodePipeline.
Bitbucket has a service called Pipelines which can deploy code to AWS services. Use Pipelines to package and push updates from your master branch to an S3 bucket which is hooked up to CodePipeline.
Note:
You must enable Pipelines in your repository.
Pipelines expects a file named bitbucket-pipelines.yml which must be placed inside your project.
Ensure you set your account's AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY in the Bitbucket Pipelines UI. This comes with an option to encrypt, so all is safe and secure.
Here is an example bitbucket-pipelines.yml which copies the contents of a directory named DynamoDb to an S3 bucket
pipelines:
  branches:
    master:
      - step:
          script:
            - apt-get update # required to install zip
            - apt-get install -y zip # required if you want to zip repository objects
            - zip -r DynamoDb.zip .
            - apt-get install -y python-pip
            - pip install boto3==1.3.0 # required for s3_upload.py
            # the first argument is the name of the existing S3 bucket to upload the artefact to
            # the second argument is the artefact to be uploaded
            # the third argument is the bucket key
            - python s3_upload.py LandingBucketName DynamoDb.zip DynamoDb.zip # run the deployment script
Here is a working example of a Python upload script, which should be deployed alongside the bitbucket-pipelines.yml file in your project. Above, I have named my Python script s3_upload.py:
from __future__ import print_function
import os
import sys
import argparse
import boto3
from botocore.exceptions import ClientError


def upload_to_s3(bucket, artefact, bucket_key):
    """
    Uploads an artefact to Amazon S3
    """
    try:
        client = boto3.client('s3')
    except ClientError as err:
        print("Failed to create boto3 client.\n" + str(err))
        return False
    try:
        client.put_object(
            Body=open(artefact, 'rb'),
            Bucket=bucket,
            Key=bucket_key
        )
    except ClientError as err:
        print("Failed to upload artefact to S3.\n" + str(err))
        return False
    except IOError as err:
        print("Failed to access artefact in this directory.\n" + str(err))
        return False
    return True


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("bucket", help="Name of the existing S3 bucket")
    parser.add_argument("artefact", help="Name of the artefact to be uploaded to S3")
    parser.add_argument("bucket_key", help="Name of the S3 Bucket key")
    args = parser.parse_args()
    if not upload_to_s3(args.bucket, args.artefact, args.bucket_key):
        sys.exit(1)


if __name__ == "__main__":
    main()
Here is an example CodePipeline with only one Source stage (you may want to add more):
Pipeline:
  Type: "AWS::CodePipeline::Pipeline"
  Properties:
    ArtifactStore:
      # Where CodePipeline copies and unpacks the uploaded artifact
      # Must be versioned
      Location: !Ref "StagingBucket"
      Type: "S3"
    DisableInboundStageTransitions: []
    RoleArn: !GetAtt "CodePipelineRole.Arn"
    Stages:
      - Name: "Source"
        Actions:
          - Name: "SourceTemplate"
            ActionTypeId:
              Category: "Source"
              Owner: "AWS"
              Provider: "S3"
              Version: "1"
            Configuration:
              # Where Pipelines uploads the artifact
              # Must be versioned
              S3Bucket: !Ref "LandingBucket"
              S3ObjectKey: "DynamoDb.zip" # Zip file that is uploaded
            OutputArtifacts:
              - Name: "DynamoDbArtifactSource"
            RunOrder: "1"

LandingBucket:
  Type: "AWS::S3::Bucket"
  Properties:
    AccessControl: "Private"
    VersioningConfiguration:
      Status: "Enabled"

StagingBucket:
  Type: "AWS::S3::Bucket"
  Properties:
    AccessControl: "Private"
    VersioningConfiguration:
      Status: "Enabled"
Reference to this Python code along with other examples can be found here: https://bitbucket.org/account/user/awslabs/projects/BP
Follow up for anyone finding this now:
AWS CodeBuild now supports Atlassian Bitbucket Cloud as a Source Type, making it the fourth alongside the existing supported sources: AWS CodeCommit, Amazon S3, and GitHub.
This means you no longer need to implement a Lambda function as suggested in @Kirkaiya's link to integrate with Bitbucket; it is still a valid solution depending on your use case, or if you're integrating with the non-cloud version of Bitbucket.
Posted on the AWS blog Aug 10, 2017 -
https://aws.amazon.com/about-aws/whats-new/2017/08/aws-codebuild-now-supports-atlassian-bitbucket-cloud-as-a-source-type/
And to clarify for the commenters: this link talks about integrating with CodeBuild, not CodePipeline. You still need to find a way to trigger the pipeline, but when it is triggered, CodeBuild will pull the code from Bitbucket rather than having to copy the code to S3 or AWS CodeCommit before triggering the pipeline.
If you are looking for a way to automate your build and deploy process using AWS CodePipeline with Bitbucket as the source, without using Lambdas, follow these steps.
Create a CodeBuild project, which supports Bitbucket as of now: https://docs.aws.amazon.com/codebuild/latest/userguide/sample-bitbucket-pull-request.html
Also create a webhook which rebuilds every time code is pushed to the repository.
You cannot use a webhook if you use a public Bitbucket repository.
CodeBuild will trigger automatically on commit, create a zip file, and store it in an S3 bucket (see the buildspec sketch below).
Create a CodePipeline with S3 as the source, and deploy it using CodeDeploy, since S3 is a valid source.
So the process from commit to deployment is totally automated.
Note 1: In order to create a webhook, you need to have Bitbucket admin access.
Note 2: As of now (April '19), CodeBuild does not support webhooks on pull request merge. If you want, you can create a trigger which will run CodeBuild, say, every day.
You can also create triggers to build code periodically: https://docs.aws.amazon.com/codebuild/latest/userguide/trigger-create.html
Update (June '19): Pull request builds for PR_Merge are now supported in CodeBuild. Reference: https://docs.aws.amazon.com/codebuild/latest/userguide/sample-bitbucket-pull-request.html#sample-bitbucket-pull-request-filter-webhook-events.
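A buildspec for that CodeBuild project can stay very small (a sketch; the zip packaging and the destination bucket are configured in the project's artifact settings, with artifact type S3 and packaging ZIP, and the CodePipeline S3 source then watches the resulting object key):

version: 0.2
phases:
  build:
    commands:
      - echo "run your build and tests here"
artifacts:
  files:
    - '**/*'  # everything in the workspace ends up in the artifact zip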
An alternative to @binary's answer, and a clarification of @OllyTheNinja's answer:
In short: let CodeBuild listen to Bitbucket's webhook and write to an S3 object; in the pipeline, listen to the update event of the latter.
In the AWS CodeSuite:
Define a CodeBuild project, with
Source: Bitbucket, using its webhook to listen for git-push events.
Buildspec: build the project according to buildspec.yml.
Artifacts: store the output of the build directly in an S3 bucket.
Define the pipeline:
Source: listen for updates to the previously defined S3 object.
Remove the Build step.
Add other steps, and configure the deploy step.
AWS CodeBuild now supports building Bitbucket pull requests, and we can make use of this for a better solution without using webhooks/API Gateway/Lambda.
You can use CodeBuild to zip your code to S3 and use that as the source in your CodePipeline.
https://lgallardo.com/2018/09/07/codepipeline-bitbucket
For me, the best way to integrate Bitbucket with any AWS service is to use Pipelines to mirror any commit into a (mirror) AWS CodeCommit repo. From there, you have prime integration into any service on AWS.
You can find an excellent how-to here.
In 12/2019, AWS launched support for Atlassian Bitbucket Cloud in beta mode.
So now you can natively integrate your AWS CodePipeline with Bitbucket Cloud.