How to run my Jenkins pipeline code in AWS CodeBuild?

I can trigger my AWS pipeline from Jenkins, but I don't want to create a buildspec.yaml; instead, I want to use the pipeline script that already works for Jenkins.

In order to use CodeBuild you need to provide the CodeBuild project with a buildspec.yaml file along with your source code, or incorporate the commands into the actual project.
However, I think you are interested in having the creation of the buildspec.yaml file done within the Jenkins pipeline.
Below is a snippet of a stage within a Jenkinsfile. It creates a buildspec file for building Docker images and then sends the contents of the workspace to a CodeBuild project. This uses the AWS CodeBuild plugin for Jenkins.
stage('Build - Non Prod') {
    String nonProductionBuildSpec = """
    version: 0.1
    phases:
      pre_build:
        commands:
          - \$(aws ecr get-login --registry-ids <number> --region us-east-1)
      build:
        commands:
          - docker build -t ces-sample-docker .
          - docker tag $NAME:$TAG <account-number>.dkr.ecr.us-east-1.amazonaws.com/$NAME:$TAG
      post_build:
        commands:
          - docker push <account-number>.dkr.ecr.us-east-1.amazonaws.com/$NAME:$TAG
    """.replace("\t"," ")
    writeFile file: 'buildspec.yml', text: nonProductionBuildSpec
    // Send checked out files to AWS
    awsCodeBuild projectName: "<codebuild-projectname>", region: "us-east-1", sourceControlType: "jenkins"
}
I hope this gives you an idea of what's possible.
Good luck!
Patrick

You will need to write a buildspec for the commands that you want AWS CodeBuild to run. If you use the CodeBuild plugin for Jenkins, you can add that to your Jenkins pipeline and use CodeBuild as a Jenkins build slave to execute the commands in your buildspec.
See more details here: https://docs.aws.amazon.com/codebuild/latest/userguide/jenkins-plugin.html
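For reference, a buildspec is just a versioned list of phases containing shell commands. A minimal sketch (the commands here are placeholders; replace them with the steps from your Jenkins pipeline script):
version: 0.2
phases:
  install:
    commands:
      - echo "Install build dependencies here"
  build:
    commands:
      - echo "Run the same commands your Jenkins pipeline script runs"
artifacts:
  files:
    - '**/*'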

@hynespm - excellent example, mate.
Here is another one based on yours, but with stripIndent() and "withAWS" to switch roles:
#!/usr/bin/env groovy
def cbResult = null
pipeline {
    .
    .
    .
    script {
        echo ("app_version TestwithAWS value : " + "${app_version}")
        String buildspec = """\
            version: 0.2
            env:
              parameter-store:
                TOKEN: /some/token
            phases:
              pre_build:
                commands:
                  - echo "List files...."
                  - ls -l
                  - echo "TOKEN is ':' \${TOKEN}"
              build:
                commands:
                  - echo "build':' Do something here..."
                  - echo "\${CODEBUILD_SRC_DIR}"
                  - ls -l "\${CODEBUILD_SRC_DIR}"
              post_build:
                commands:
                  - pwd
                  - echo "postbuild':' Done..."
            """.stripIndent()
        withAWS(region: 'ap-southeast-2', role: 'CodeBuildWithJenkinsRole', roleAccount: '123456789123', externalId: '123456-2c1a-4367-aa09-7654321') {
            sh 'aws ssm get-parameter --name "/some/token"'
            try {
                cbResult = awsCodeBuild projectName: 'project-lambda',
                        sourceControlType: 'project',
                        credentialsType: 'keys',
                        awsAccessKey: env.AWS_ACCESS_KEY_ID,
                        awsSecretKey: env.AWS_SECRET_ACCESS_KEY,
                        awsSessionToken: env.AWS_SESSION_TOKEN,
                        region: 'ap-southeast-2',
                        envVariables: '[ { GITHUB_OWNER, special }, { GITHUB_REPO, project-lambda } ]',
                        artifactTypeOverride: 'S3',
                        artifactLocationOverride: 'special-artifacts',
                        overrideArtifactName: 'True',
                        buildSpecFile: buildspec
            } catch (Exception cbEx) {
                cbResult = cbEx.getCodeBuildResult()
            }
        }
    } //script
    .
    .
    .
}

Related

error configuring S3 Backend: no valid credential sources for S3 Backend found

I've been trying to add a CircleCI CI/CD pipeline to my AWS project written in Terraform.
The problem is, terraform init / plan / apply works on my local machine, but it throws this error in CircleCI.
Error -
Initializing the backend...
╷
│ Error: error configuring S3 Backend: no valid credential sources for S3 Backend found.
│
│ Please see https://www.terraform.io/docs/language/settings/backends/s3.html
│ for more information about providing credentials.
│
│ Error: NoCredentialProviders: no valid providers in chain. Deprecated.
│ For verbose messaging see aws.Config.CredentialsChainVerboseErrors
My CircleCI config is this -
version: 2.1
orbs:
  python: circleci/python@1.5.0
  # terraform: circleci/terraform@3.1.0
jobs:
  build:
    # will use a python 3.10.2 container
    docker:
      - image: cimg/python:3.10.2
    working_directory: ~/project
    steps:
      - checkout
      - run:
          name: Check pyton version
          command: python --version
      - run:
          name: get current dir
          command: pwd
      - run:
          name: list of things in that
          command: ls -a
      - run:
          name: Install terraform
          command: bash scripts/install_tf.sh
      - run:
          name: Init infrastructure
          command: bash scripts/init.sh dev
# Invoke jobs via workflows
workflows:
  .......
And my init.sh is -
cd ./Terraform
echo "arg: $1"
if [[ "$1" == "dev" || "$1" == "stage" || "$1" == "prod" ]];
then
    echo "environement: $1"
    terraform init -migrate-state -backend-config=backend.$1.conf -var-file=terraform.$1.tfvars
else
    echo "Wrong Argument"
    echo "Pass 'dev', 'stage' or 'prod' only."
fi
My main.tf is -
provider "aws" {
profile = "${var.profile}"
region = "${var.region}"
}
terraform {
backend "s3" {
}
}
And backend.dev.conf is -
bucket = "bucket-name"
key = "mystate.tfstate"
region = "ap-south-1"
profile = "dev"
Also, my terraform.dev.tfvars is -
region = "ap-south-1"
profile = "dev"
These work perfectly on my local machine (Mac M1), but CircleCI throws this error for the backend. Yes, I've added environment variables with my aws_secret_access_key and aws_access_key_id, and it still doesn't work.
I've seen so many tutorials and nothing seems to solve this, and I don't want to write AWS credentials in my code. Any idea how I can solve this?
Update:
I have updated my pipeline to this -
version: 2.1
orbs:
  python: circleci/python@1.5.0
  aws-cli: circleci/aws-cli@3.1.3
jobs:
  build:
    # will use a python 3.10.2 container
    docker:
      - image: cimg/python:3.10.2
    working_directory: ~/project
    # Checkout the code as the first step. This is a dedicated
    steps:
      - checkout
      - run:
          name: Check pyton version
          command: python --version
      - run:
          name: get current dir
          command: pwd
      - run:
          name: list of things in that
          command: ls -a
  aws-cli-cred-setup:
    executor: aws-cli/default
    steps:
      - aws-cli/setup:
          aws-access-key-id: aws_access_key_id
          aws-secret-access-key: aws_secret_access_key
          aws-region: region
      - run:
          name: get aws acc info
          command: aws sts get-caller-identity
  terraform-setup:
    executor: aws-cli/default
    working_directory: ~/project
    steps:
      - checkout
      - run:
          name: Install terraform
          command: bash scripts/install_tf.sh
      - run:
          name: Init infrastructure
          command: bash scripts/init.sh dev
    context: terraform
# Invoke jobs via workflows
workflows:
  dev_workflow:
    jobs:
      - build:
          filters:
            branches:
              only: main
      - aws-cli-cred-setup
          # context: aws
      - terraform-setup:
          requires:
            - aws-cli-cred-setup
But it still throws the same error.
You have probably added the aws_secret_access_key and aws_access_key_id to your project settings, but I don't see them being used in your pipeline configuration. You should do something like the following, so they are known at runtime:
version: 2.1
orbs:
  python: circleci/python@1.5.0
jobs:
  build:
    docker:
      - image: cimg/python:3.10.2
    working_directory: ~/project
    environment:
      AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
      AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
    steps:
      - run:
          name: Check python version
          command: python --version
      ...
I would advise you to read about environment variables in the documentation.
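For example (a sketch only; the context name is an assumption), you can also supply these variables through a CircleCI context attached to the job in your workflow, which is what the commented-out context lines in your config hint at:
workflows:
  main:
    jobs:
      - build:
          context: aws   # assumed context that defines AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY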
OK, I managed to fix this issue. You have to remove profile from the provider block and the other .tf files.
So my main.tf file is -
provider "aws" {
region = "${var.region}"
}
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 4.30"
}
}
backend "s3" {
}
}
And backend.dev.conf is -
bucket = "bucket"
key = "dev/xxx.tfstate"
region = "ap-south-1"
And most importantly, you have to put the access key, access key ID, and region inside CircleCI -> your project -> environment variables.
And you have to set up the AWS CLI on CircleCI, apparently inside a job, in config.yml -
version: 2.1
orbs:
  python: circleci/python@1.5.0
  aws-cli: circleci/aws-cli@3.1.3
jobs:
  build:
    # will use a python 3.10.2 container
    docker:
      - image: cimg/python:3.10.2
    working_directory: ~/project
  plan-apply:
    executor: aws-cli/default
    docker:
      - image: docker.mirror.hashicorp.services/hashicorp/terraform:light
    working_directory: ~/project
    steps:
      - checkout
      - aws-cli/setup:
          aws-access-key-id: aws_access_key_id
          aws-secret-access-key: aws_secret_access_key
          aws-region: region
      - run:
          name: get aws acc info
          command: aws sts get-caller-identity
      - run:
          name: Init infrastructure
          command: sh scripts/init.sh dev
      - run:
          name: Plan infrastructure
          command: sh scripts/plan.sh dev
      - run:
          name: Apply infrastructure
          command: sh scripts/apply.sh dev
.....
.....
This solved the issue. But you have to init, plan and apply inside the job where you set up the AWS CLI. I might be wrong to do setup and plan inside the same job, but I'm learning now and this did the job. The API changed and old tutorials don't work nowadays.
Comment me your suggestions if any.
Adding a profile to your backend will fix this issue. Example:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.30"
    }
  }
  backend "s3" {
    bucket  = "terraform-state"
    region  = "ap-south-1"
    key     = "dev/xxx.tfstate"
    profile = "myAwsCliProfile"
  }
}

How to write the buildspec YAML code in CDK

I am making a CodeBuild project with CDK.
CodeBuild can accept the buildspec as YAML; however, how can I do the same thing in CDK?
What I want to do is like this (of course it doesn't work, though):
I forcibly put the YAML code in commands.
const buildProject = new codebuild.PipelineProject(this, 'project', {
  environment: {// I guess I need to select ubuntu and image 4.0},
  buildSpec: codebuild.BuildSpec.fromObject({
    version: '0.2',
    phases: {
      build: {
        commands: ['
          version: 0.2
          phases:
            install:
              runtime-versions:
                docker: 18
            build:
              commands:
                - apt-get install jq -y
                - ContainerName="tnkDjangoContainer"
                - ImageURI=$(cat imageDetail.json | jq -r '.ImageURI')
                - printf '[{"name":"CONTAINER_NAME","imageUri":"IMAGE_URI"}]' > imagedefinitions.json
                - sed -i -e "s|CONTAINER_NAME|$ContainerName|g" imagedefinitions.json
                - sed -i -e "s|IMAGE_URI|$ImageURI|g" imagedefinitions.json
                - cat imagedefinitions.json
          artifacts:
            files:
              - imagedefinitions.json
        ',
        ],
      },
    }
  })
});
Also, I guess I need to choose the image that runs the buildspec, such as Ubuntu. Where can I set this?
CDK does not expose a method to inline a YAML buildspec at synth-time. You could do this yourself by parsing existing YAML into a JS object and passing the result to BuildSpec.fromObject.
CDK's codebuild.Project gives you several other ways to provide a buildSpec:
BuildSpec.fromObject inlines a buildspec from key-value pairs at synth-time. It should follow the CodeBuild buildspec format. CDK will output a stringified JSON buildspec in the CloudFormation template. If you want CDK to output YAML instead, use fromObjectToYaml. Both methods take key-value pairs (type: [key: string]: any;) as input, so TS can't offer much typechecking help.
BuildSpec.fromSourceFilename tells CodeBuild to use a buildspec file in your source at run-time. The filename is passed in the CloudFormation template.
Here's an example of parsing a YAML string into inlined YAML output, using the yaml package. Note that the environment is defined outside the buildspec:
import * as yaml from 'yaml';

const fromYaml = yaml.parse(`
version: '0.2'
phases:
  build:
    commands:
      - npm run build
`);

new codebuild.Project(this, 'YamlInYamlOutProject', {
  environment: {
    buildImage: codebuild.LinuxBuildImage.STANDARD_5_0, // Ubuntu Standard 5
  },
  buildSpec: codebuild.BuildSpec.fromObjectToYaml(fromYaml),
});

CodeBuild shows SUCCEEDED; however, it should be saying FAILED when using Packer

I am running into an issue which I know has to do with my buildspec.yml file:
phases:
  install:
    runtime-versions:
      python: 3.8
  pre_build:
    commands:
      - echo "Installing Packer"
      - curl -o packer.zip https://releases.hashicorp.com/packer/1.7.2/packer_1.7.2_linux_amd64.zip && unzip packer.zip
      - echo "Validating Packer template"
      - ./packer validate pipeline/build/${FUNCTION}-build.json
  build:
    commands:
      - ./packer build -color=false pipeline/build/${FUNCTION}-build.json | tee build.log
  post_build:
    commands:
      # Get the ARN of our Lambda notifier
      - SLACK_ARN=$(aws cloudformation list-exports | jq -r '.["Exports"][] | select(.Name == "notify_slack_arn") | .Value')
      # Send a Slack notification
      - |
        if [ "${CODEBUILD_BUILD_SUCCEEDING}" -eq 1 ]
        then
          aws lambda invoke --cli-binary-format raw-in-base64-out --function-name ${SLACK_ARN} --payload "{ \"LambdaInvokeEvent\": { \"message\": \"Daily AMI Build for ${FUNCTION} SUCCESSFUL!\", \"slack_url\": \"${SLACK_URL}\" } }" slack_output.log
        else
          aws lambda invoke --cli-binary-format raw-in-base64-out --function-name ${SLACK_ARN} --payload "{ \"LambdaInvokeEvent\": { \"message\": \"Daily AMI Build for ${FUNCTION} FAILED!\", \"slack_url\": \"${SLACK_URL}\" } }" slack_output.log
        fi
      - echo "Build completed on $(date)"
artifacts:
  files:
    - "**/*"
  discard-paths: no
So from my code that builds in CodeBuild, this is what happens: even though it fails, CodeBuild says SUCCEEDED, which is driving me nuts! It is supposed to fail and give a failing notification. Does this have to do with my buildspec.yml file or the bash script that is running? Thanks!
I think that happens because of your | tee build.log. Packer fails, but tee succeeds, and since the exit status of the pipeline is that of the last command, the build stage succeeds.
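If you want to keep tee for logging, one option (a sketch, assuming the build image's shell is bash, since pipefail is not POSIX sh) is to make the pipeline propagate packer's exit status:
build:
  commands:
    - set -o pipefail && ./packer build -color=false pipeline/build/${FUNCTION}-build.json | tee build.log
With that, a failing packer run fails the build phase, CODEBUILD_BUILD_SUCCEEDING becomes 0 in post_build, and your Slack notification takes the FAILED branch.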

New line stripped from first line of Codebuild build spec when deploying with Serverless Framework

I am deploying a build spec for AWS CodeBuild using the Serverless Framework. When I deploy, the newline after the first line is absent from the build spec. This resource previously deployed without a problem and I cannot see anything I have done to break it. Is this a problem on my end or a bug in Serverless/CloudFormation?
Below is the CloudFormation template and the resulting build spec copied from the AWS console.
Resources:
  CodeBuild:
    Type: 'AWS::CodeBuild::Project'
    Properties:
      Name: sls-retrobase-frontend-CodeBuild-${opt:stage}
      ServiceRole: !GetAtt CodeBuildRole.Arn
      Artifacts:
        Type: CODEPIPELINE
        Name: sls-retrobase
      Environment:
        Type: LINUX_CONTAINER
        ComputeType: BUILD_GENERAL1_SMALL
        Image: "aws/codebuild/amazonlinux2-x86_64-standard:3.0"
      Source:
        Type: CODEPIPELINE
        BuildSpec: !Sub
          - >
            version: 0.2
            phases:
              pre_build:
                commands:
                  - echo List directory files...
                  - ls
                  - echo Installing source NPM dependencies...
                  - npm install
              build:
                commands:
                  - echo List active directory...
                  - ls
                  - echo Inserting Api Url into config.json from environment
                  - node makeConfig.js ${apiUrl} ${auth0Audience} ${auth0Domain} ${auth0ClientId}
                  - echo Build started on `date`
                  - npm run build
              post_build:
                commands:
                  - echo List build directory...
                  - ls ./build
                  - aws s3 cp --recursive --acl public-read ./build s3://${Website}
            artifacts:
              files:
                - '**/*'
          - apiUrl: !Join ['', [ "https://", !Ref QueryRestApi, ".execute-api.us-east-1.amazonaws.com/${opt:stage}/sls-retrobase-${opt:stage}"]]
            websiteUrl: !GetAtt Website.WebsiteURL
            auth0Audience: '***'
            auth0Domain: '***'
            auth0ClientId: '***'
build spec:
version: 0.2 phases:
  pre_build:
    commands:
      - echo List directory files...
      - ls
      - echo Installing source NPM dependencies...
      - npm install
  build:
    commands:
      - echo List active directory...
      - ls
      - echo Inserting Api Url into config.json from environment
      - node makeConfig.js https://h0suk54yw0.execute-api.us-east-1.amazonaws.com/test/sls-retrobase-test https://sls-retrobase https://dev-y33gimcf.eu.auth0.com/ Xn6SDc43vE8P0sQHkVLtiBSBVFT5rJMU
      - echo Build started on `date`
      - npm run build
  post_build:
    commands:
      - echo List build directory...
      - ls ./build
      - aws s3 cp --recursive --acl public-read ./build s3://serverless-retrobase-resources-test-website-7cvhqlgbkfj7
artifacts:
  files:
    - '**/*'
This is probably because of your use of >. Please change it to |:
BuildSpec: !Sub
  - |
    version: 0.2
    phases:
Alternatively, fix your spacing when using >; the > block and your code are not aligned.
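To illustrate the difference (a toy example, not your actual template): a folded scalar (>) joins lines at the same indentation with spaces, which is exactly how version: 0.2 and phases: ended up on one line, while a literal scalar (|) keeps every newline:
folded: >
  version: 0.2
  phases:
# value is "version: 0.2 phases:\n"
literal: |
  version: 0.2
  phases:
# value is "version: 0.2\nphases:\n"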

How to pass the correct project path to bitbucket pipeline?

I want to deploy an AWS Lambda .NET Core project using a Bitbucket pipeline.
I have created bitbucket-pipelines.yml like below, but after the build runs I get this error -
MSBUILD : error MSB1003: Specify a project or solution file. The current working directory does not contain a project or solution file.
File code -
image: microsoft/dotnet:sdk
pipelines:
  default:
    - step:
        caches:
          - dotnetcore
        script: # Modify the commands below to build your repository.
          - export PROJECT_NAME=TestAWS/AWSLambda1/AWSLambda1.sln
          - dotnet restore
          - dotnet build $PROJECT_NAME
          - pipe: atlassian/aws-lambda-deploy:0.2.1
            variables:
              AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
              AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
              AWS_DEFAULT_REGION: 'us-east-1'
              FUNCTION_NAME: 'my-lambda-function'
              COMMAND: 'update'
              ZIP_FILE: 'code.zip'
project structure is like this -
The problem is here:
PROJECT_NAME=TestAWS/AWSLambda1/AWSLambda1.sln
This is the incorrect path. Bitbucket Pipelines will use a special path in the Docker image, something like /opt/atlassian/pipelines/agent/build/YOUR_PROJECT, to do a Git clone of your project.
You can see this when you click on the "Build Setup" step in the Pipelines web console:
Cloning into '/opt/atlassian/pipelines/agent/build'...
You can use a pre-defined environment variable to retrieve this path: $BITBUCKET_CLONE_DIR, as described here: https://support.atlassian.com/bitbucket-cloud/docs/variables-in-pipelines/
Consider something like this in your yml build script:
script:
  - echo $BITBUCKET_CLONE_DIR # Debug: Print the $BITBUCKET_CLONE_DIR
  - pwd # Debug: Print the current working directory
  - find "$(pwd -P)" -name AWSLambda1.sln # Debug: Show the full file path of AWSLambda1.sln
  - export PROJECT_NAME="$BITBUCKET_CLONE_DIR/AWSLambda1.sln"
  - echo $PROJECT_NAME
  - if [ -f "$PROJECT_NAME" ]; then echo "File exists" ; fi
  # Try this if the file path is not as expected
  - export PROJECT_NAME="$BITBUCKET_CLONE_DIR/AWSLambda1/AWSLambda1.sln"
  - echo $PROJECT_NAME
  - if [ -f "$PROJECT_NAME" ]; then echo "File exists" ; fi