AWS CDK CodePipeline deploying app and CDK

I'm using the AWS CDK with TypeScript and I'd like to automate my CDK and code package deployments.
I have 2 GitHub repos: app-cdk and app-website.
I have set up a CodePipeline as follows:
const pipeline = new CodePipeline(this, 'MyAppPipeline', {
  pipelineName: 'MyAppPipeline',
  synth: new ShellStep('Synth', {
    input: CodePipelineSource.gitHub(`${ORG_NAME}/app-cdk`, BRANCH_NAME, {
    }),
    commands: ['npm ci', 'npm run build', 'npx cdk synth']
  })
});
and added a beta stage as follows:
pipeline.addStage(new MyAppStage(this, 'Beta', {
  env: { account: 'XXXXXXXXX', region: 'us-east-2' }
}));
This works fine when I push code to my CDK code package, and it deploys new resources. How can I add my website repo as a source to kick off this pipeline, build it in a different manner, and deploy the assets to the necessary resources? Shouldn't that be part of the CodePipeline's source and build stages?

I have encountered a similar scenario, where I had to create a CDK pipeline for multiple static S3 sites in a repository.
It soon became evident that this had to be done using two stacks, because the pipeline requires each step to be of type Stage and does not support a plain Construct, whereas my static S3 websites were constructs (BucketDeployment).
The way I handled this integration is as follows:
# cb is the aws_codebuild module (e.g. from aws_cdk import aws_codebuild as cb)
deployment_code_build = cb.Project(self, 'PartnerS3deployment',
    project_name='PartnerStaticS3deployment',
    source=cb.Source.git_hub(owner='<github-org>',
        repo='<repo-name>', clone_depth=1,
        webhook_filters=[
            cb.FilterGroup.in_event_of(
                cb.EventAction.PUSH).and_branch_is(
                branch_name="main")]),
    environment=cb.BuildEnvironment(
        build_image=cb.LinuxBuildImage.STANDARD_5_0
    ))
This provisioned a CodeBuild project which would dynamically deploy the stacks listed by cdk ls.
The above CodeBuild project needs a buildspec file in the root of your repo with the following contents (for reference):
version: 0.2
phases:
  install:
    commands:
      - echo Entered in install phase...
      - npm install -g aws-cdk
      - cdk --version
  build:
    commands:
      - pwd
      - cd cdk_pipeline_static_websites
      - ls -lah
      - python -m pip install -r requirements.txt
      - nohup ./parallel_deploy.sh & echo $! > pidfile && wait $(cat pidfile)
    finally:
      - echo Build completed on `date`
The contents of parallel_deploy.sh are as follows:
#!/bin/bash
for stack in $(cdk list);
do
    cdk deploy $stack --require-approval=never &
done;
# wait for all backgrounded deploys to finish before the script exits
wait
While this works great, there has to be a simpler alternative that can directly import other stacks/constructs in the CDK Pipeline class.
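For the original TypeScript setup, one such simpler route is to feed app-website into the same synth step as an additional input, so a push to either repo triggers the pipeline and the website can be deployed as ordinary CDK assets. The sketch below is only an illustration, not a verified setup: the 'website' directory key, the npm build commands, and the build/ output path are assumptions about app-website.

import { CodePipeline, CodePipelineSource, ShellStep } from 'aws-cdk-lib/pipelines';

// Sketch only: the 'website' directory name and the website build commands are assumptions.
const pipeline = new CodePipeline(this, 'MyAppPipeline', {
  pipelineName: 'MyAppPipeline',
  synth: new ShellStep('Synth', {
    // Primary input: the CDK repo, as in the original pipeline.
    input: CodePipelineSource.gitHub(`${ORG_NAME}/app-cdk`, BRANCH_NAME),
    additionalInputs: {
      // Checked out under ./website during synth; pushes to this repo also start the pipeline.
      website: CodePipelineSource.gitHub(`${ORG_NAME}/app-website`, BRANCH_NAME),
    },
    commands: [
      'cd website && npm ci && npm run build && cd ..', // build the site first
      'npm ci',
      'npm run build',
      'npx cdk synth',
    ],
  }),
});

A stack inside MyAppStage can then publish the built site with aws-s3-deployment's BucketDeployment, pointing Source.asset() at the website's build output (e.g. website/build), so no separate CodeBuild project is needed for the website.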

Related

AWS CodeBuild batch build-list not running phases for each build identifier

I'm new to AWS CodeBuild and have been trying to work out how to run the parts of the build in parallel (or even just use the same buildspec.yml for each project in my solution).
I thought the batch -> build-list was the way to go. From my understanding of the documentation this will run the phases in the buildspec for each item in the build list.
Unfortunately that does not appear to be the case - the batch section appears to be ignored and the buildspec runs the phases once, for the default environment variables held at project level.
My buildspec is
version: 0.2
batch:
  fast-fail: false
  build-list:
    - identifier: getPrintJobNote
      env:
        variables:
          IMAGE_REPO_NAME: getprintjobnote
          FOLDER_NAME: getPrintJobNote
      ignore-failure: false
    - identifier: GetPrintJobFilters
      env:
        variables:
          IMAGE_REPO_NAME: getprintjobfilters
          FOLDER_NAME: GetPrintJobFilters
      ignore-failure: false
phases:
  pre_build:
    commands:
      - echo Logging into Amazon ECR
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
  build:
    commands:
      - echo Building lambda docker container
      - echo Build path $CODEBUILD_SRC_DIR
      - cd $CODEBUILD_SRC_DIR/src/$FOLDER_NAME
      - docker build -t $IMAGE_REPO_NAME:$IMAGE_TAG .
      - docker tag $IMAGE_REPO_NAME:$IMAGE_TAG $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG
  post_build:
    commands:
      - echo Pushing to Amazon ECR
      - docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG
Is there something wrong in my buildspec, does build-list not do what I think it does, or is there something else that needs to be configured somewhere to enable this?
In the project configuration I found a setting for "enable concurrent build limit - optional". I tried changing this but got an error:
Project-level concurrent build limit cannot exceed the account-level concurrent build limit of 1.
This may not be related but could be because my account is new... I think the default should be 60 anyway.
I had a similar problem; it turned out that batch builds are a separate build type. Go to the project -> Start build with overrides, then select batch build.
I also split the buildspec into two files: the first spec has the batch config, the second one has the "actual" phases, referenced with the buildspec: directive. I'm not sure if this split is required, though.
Also: if builds are webhook-triggered, the webhook likewise has to be configured to run a batch build.
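If the project happens to be defined with the CDK (as in the first question above), the same two pieces, enabling batch builds on the project and making the webhook start a batch build, can be expressed roughly as in this sketch. It is an illustration only: the repo details, the buildspec-batch.yml file name, and the image are assumptions, and the "actual" phases would live in the second spec referenced from each build-list entry.

import * as codebuild from 'aws-cdk-lib/aws-codebuild';

// Sketch, not a verified config: owner, repo and file names are placeholders.
const project = new codebuild.Project(this, 'BatchBuilds', {
  source: codebuild.Source.gitHub({
    owner: '<github-org>',
    repo: '<repo-name>',
    webhook: true,
    // Without this, a webhook push starts a normal single build rather than a batch.
    webhookTriggersBatchBuild: true,
    webhookFilters: [
      codebuild.FilterGroup.inEventOf(codebuild.EventAction.PUSH).andBranchIs('main'),
    ],
  }),
  environment: {
    buildImage: codebuild.LinuxBuildImage.STANDARD_5_0,
    privileged: true, // the build phases run docker build/push
  },
  // The spec that contains the batch -> build-list section.
  buildSpec: codebuild.BuildSpec.fromSourceFilename('buildspec-batch.yml'),
});

// Grants the permissions CodeBuild needs to start the batch's child builds
// (harmless if the webhook setting has already enabled batch builds).
project.enableBatchBuilds();

For one-off runs, the console's "Start build with overrides" -> batch build, or aws codebuild start-build-batch from the CLI, starts a batch without going through the webhook.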

How to solve an AWS Lambda function deployment problem?

.. aaaand me again :)
This time with a very interesting problem.
Again an AWS Lambda function: Node.js 12, JavaScript, Ubuntu 18.04 for local development, AWS CLI / AWS SAM / Docker / IntelliJ. Everything works perfectly locally, and it's time to deploy.
So I set up an AWS account for tests, created and assigned an access key/secret, and finally tried to deploy.
Almost at the end, an error pops up, aborting the deployment.
I'm showing the SAM CLI run from a terminal, but the same happens with IntelliJ.
(Of course I mask/change some names.)
From a terminal I go to where I have my local sandbox with the project, and then:
$ sam deploy --guided
Configuring SAM deploy
======================
Looking for config file [samconfig.toml] : Not found
Setting default arguments for 'sam deploy'
=========================================
Stack Name [sam-app]: MyActualProjectName
AWS Region [us-east-1]: us-east-2
#Shows you resources changes to be deployed and require a 'Y' to initiate deploy
Confirm changes before deploy [y/N]: y
#SAM needs permission to be able to create roles to connect to the resources in your template
Allow SAM CLI IAM role creation [Y/n]: y
Save arguments to configuration file [Y/n]: y
SAM configuration file [samconfig.toml]: y
SAM configuration environment [default]:
Looking for resources needed for deployment: Not found.
Creating the required resources...
Successfully created!
Managed S3 bucket: aws-sam-cli-managed-default-samclisourcebucket-7qo1hy7mdu9z
A different default S3 bucket can be set in samconfig.toml
Saved arguments to config file
Running 'sam deploy' for future deployments will use the parameters saved above.
The above parameters can be changed by modifying samconfig.toml
Learn more about samconfig.toml syntax at
https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-config.html
Error: Unable to upload artifact MyFunctionName referenced by CodeUri parameter of MyFunctionName resource.
ZIP does not support timestamps before 1980
$
I spent quite some time looking around for this problem but I found only some old threads.
In theory this problem was solved in 2018... but probably some npm libraries I had to use contain something old... how in the world do I fix this stuff?
In one thread I found a kind of workaround.
In the buildspec.yml file, somebody suggested adding, AFTER the npm install:
ls $CODEBUILD_SRC_DIR
find $CODEBUILD_SRC_DIR/node_modules -mtime +10950 -exec touch {} \;
Basically the idea is to touch all the files installed by npm install, but the error still happens.
This is my buildspec.yml file after the modification:
version: 0.2
phases:
  install:
    commands:
      # Install all dependencies (including dependencies for running tests)
      - npm install
      - ls $CODEBUILD_SRC_DIR
      - find $CODEBUILD_SRC_DIR/node_modules -mtime +10950 -exec touch {} \;
  pre_build:
    commands:
      # Discover and run unit tests in the '__tests__' directory
      - npm run test
      # Remove all unit tests to reduce the size of the package that will be ultimately uploaded to Lambda
      - rm -rf ./__tests__
      # Remove all dependencies not needed for the Lambda deployment package (the packages from devDependencies in package.json)
      - npm prune --production
  build:
    commands:
      # Use AWS SAM to package the application by using AWS CloudFormation
      - aws cloudformation package --template template.yml --s3-bucket $S3_BUCKET --output-template template-export.yml
artifacts:
  type: zip
  files:
    - template-export.yml
I will continue to search, but again I wonder if somebody here has had this kind of problem and has some suggestions or a methodology for how to solve it.
Many, many thanks!
Steve

Azure DevOps S3 React/MERN stack

Does anyone have any experience using Azure DevOps to deploy a React build package to AWS using their extension?
I'm stuck on uploading only the build package produced by npm run build.
Here is my script so far:
trigger:
- master
pool:
  vmImage: 'ubuntu-latest'
steps:
- task: NodeTool@0
  inputs:
    versionSpec: '10.x'
  displayName: 'Install Node.js'
- script: |
    npm install
    npm test
    npm run build
- task: S3Upload@1
  inputs:
    awsCredentials: 'AWS Deploy User'
    regionName: 'us-east-1'
    bucketName: 'test'
    globExpressions: '**'
    createBucket: true
  displayName: 'npm install and build'
The only option on the S3Upload task that stands out is sourceFolder. They use something like "$(Build.ArtifactStagingDirectory)", but since I've never used that before, it doesn't make a lot of sense to me. Would it be as simple as $(Build.ArtifactStagingDirectory)/build?
The predefined variable $(Build.ArtifactStagingDirectory) is mapped to c:\agent_work\1\a, which is the local path on the agent where any artifacts are copied to before being pushed to their destination.
In your YAML pipeline, your source code is downloaded into the folder $(Build.SourcesDirectory) (i.e. c:\agent_work\1\s), and the npm commands in the script task all run in this folder. So the npm build result is in the folder $(Build.SourcesDirectory)\build (i.e. c:\agent_work\1\s\build).
The S3Upload task uploads files from $(Build.ArtifactStagingDirectory) by default. You can explicitly point the sourceFolder attribute (default is $(Build.ArtifactStagingDirectory)) of the S3Upload task to the folder $(Build.SourcesDirectory)\build. See below:
- task: S3Upload@1
  inputs:
    awsCredentials: 'AWS Deploy User'
    regionName: 'us-east-1'
    bucketName: 'test'
    globExpressions: '**'
    createBucket: true
    sourceFolder: '$(Build.SourcesDirectory)/build'
Another workaround is to use the Copy Files task to copy the build results from $(Build.SourcesDirectory)\build to the folder $(Build.ArtifactStagingDirectory). See the example below:
- task: CopyFiles@2
  inputs:
    Contents: 'build/**' # Pull the build directory (React)
    TargetFolder: '$(Build.ArtifactStagingDirectory)'

Google Cloud Build - Terraform Self-Destruction on Build Failure

I'm currently facing an issue with my Google Cloud Build for CI/CD.
First, I build new Docker images of multiple microservices and use Terraform to create the GCP infrastructure in which the containers will also live in production.
Then I perform some integration/system tests, and if everything is fine, I push new versions of the microservice images to the container registry for later deployment.
My problem is that the Terraformed infrastructure doesn't get destroyed if the Cloud Build fails.
Is there a way to always execute a Cloud Build step even if some previous steps have failed? Here I would want to always execute "terraform destroy".
Or, specifically for Terraform, is there a way to define a self-destructing Terraform environment?
cloudbuild.yaml example with just one Docker container:
steps:
# build fresh ...
- id: build
  name: 'gcr.io/cloud-builders/docker'
  dir: '...'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/staging/...:latest', '-t', 'gcr.io/$PROJECT_ID/staging/...:$BUILD_ID', '.', '--file', 'production.dockerfile']
# push
- id: push
  name: 'gcr.io/cloud-builders/docker'
  dir: '...'
  args: ['push', 'gcr.io/$PROJECT_ID/staging/...']
  waitFor: [build]
# setup terraform
- id: terraform-init
  name: 'hashicorp/terraform:0.12.28'
  dir: '...'
  args: ['init']
  waitFor: [push]
# deploy GCP resources
- id: terraform-apply
  name: 'hashicorp/terraform:0.12.28'
  dir: '...'
  args: ['apply', '-auto-approve']
  waitFor: [terraform-init]
# tests
- id: tests
  name: 'python:3.7-slim'
  dir: '...'
  waitFor: [terraform-apply]
  entrypoint: /bin/sh
  args:
  - -c
  - 'pip install -r requirements.txt && pytest ... --tfstate terraform.tfstate'
# remove GCP resources
- id: terraform-destroy
  name: 'hashicorp/terraform:0.12.28'
  dir: '...'
  args: ['destroy', '-auto-approve']
  waitFor: [tests]
Google Cloud Build doesn't yet support allow_failure or a similar mechanism, as mentioned in this unsolved but closed issue.
What you can do, as mentioned in the linked issue, is chain shell conditional operators.
If you want to run a command on failure then you can do something like this:
- id: tests
  name: 'python:3.7-slim'
  dir: '...'
  waitFor: [terraform-apply]
  entrypoint: /bin/sh
  args:
  - -c
  - pip install -r requirements.txt && pytest ... --tfstate terraform.tfstate || echo "This failed!"
This would run your tests as normal and then echo This failed! to the logs if the tests fail. If you want to run terraform destroy -auto-approve on failure, then you would need to replace the echo "This failed!" with terraform destroy -auto-approve. Of course you will also need the Terraform binaries in the Docker image you are using, so you will need a custom image that has both Python and Terraform in it for that to work.
- id: tests
  name: 'example-custom-python-and-terraform-image:3.7-slim-0.12.28'
  dir: '...'
  waitFor: [terraform-apply]
  entrypoint: /bin/sh
  args:
  - -c
  - pip install -r requirements.txt && pytest ... --tfstate terraform.tfstate || { terraform destroy -auto-approve; false; }
The above job also runs false after the terraform destroy so that the step still returns a non-zero exit code and is marked as failed, instead of only failing if terraform destroy itself failed as well.
An alternative to this would be to use something like Test Kitchen which will automatically stand up infrastructure, run the necessary verifiers and then destroy it at the end all in a single kitchen test command.
It's probably also worth mentioning that your pipeline is entirely serial so you don't need to use waitFor. This is mentioned in the Google Cloud Build documentation:
A build step specifies an action that you want Cloud Build to perform. For each build step, Cloud Build executes a docker container as an instance of docker run. Build steps are analogous to commands in a script and provide you with the flexibility of executing arbitrary instructions in your build. If you can package a build tool into a container, Cloud Build can execute it as part of your build. By default, Cloud Build executes all steps of a build serially on the same machine. If you have steps that can run concurrently, use the waitFor option.
and
Use the waitFor field in a build step to specify which steps must run before the build step is run. If no values are provided for waitFor, the build step waits for all prior build steps in the build request to complete successfully before running. For instructions on using waitFor and id, see Configuring build step order.

Docker image build as AWS CodePipeline step

I was able to set up integration between GitHub and AWS CodePipeline, so now my code is uploaded to S3 by a Lambda function after a push event. That works very well.
A new ZIP with source code on S3 triggers a pipeline, which builds the code. That's fine. Now I'd also like to build a Docker image for the project.
The first problem is that you can't mix a project (Node.js) build and a Docker build. That's fine, it makes sense. The next issue is that you can't have another buildspec.yml for the Docker build. You have to specify the build commands manually; OK, that works as a workaround.
The biggest problem, though, or my lack of understanding, is how to make the Docker build part of the pipeline. The first build step builds the project, then the next build step builds the Docker image: two standalone AWS CodeBuild projects.
The thing is that a pipeline build step has to produce an artifact as output, but a Docker build doesn't produce any files, and it looks like the final docker push after docker build does not qualify as an artifact for the pipeline service.
Is there a way how to do it?
Thanks
A bit late, but hopefully this will be helpful for someone. You should publish the Docker image as part of your post_build phase commands. Here's an example buildspec.yml:
version: 0.1
phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - $(aws ecr get-login --region $AWS_REGION)
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - docker build -t $IMAGE .
      - "docker tag $IMAGE $REPO/$IMAGE:${CODEBUILD_BUILD_ID##*:}"
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker image...
      - "docker push $REPO/$IMAGE:${CODEBUILD_BUILD_ID##*:}"
      - "echo {\\\"image\\\":\\\"$REPO/$IMAGE:${CODEBUILD_BUILD_ID##*:}\\\"} > image.json"
artifacts:
  files:
    - 'image.json'
As you can see, the CodeBuild project expects a few parameters (AWS_REGION, REPO and IMAGE) and publishes the image to Amazon ECR (but you can use a registry of your choice). It also uses the built-in CODEBUILD_BUILD_ID environment variable to derive a dynamic value for the image tag. After the image is pushed, it creates a JSON file with the full path to the image and publishes it as an artifact for CodePipeline to use.
For this to work, the CodeBuild project's "environment image" should be of the "docker" type with the "privileged" flag enabled. When creating the CodeBuild project in your pipeline, you can also specify the environment variables that are used in the buildspec file above.
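If you define that CodeBuild project with the CDK rather than through the console, the same settings (a Docker-capable image with privileged mode plus the REPO and IMAGE variables) look roughly like the sketch below. The values are placeholders and not part of the original answer; AWS_REGION does not need to be set because CodeBuild provides it as a built-in environment variable.

import * as codebuild from 'aws-cdk-lib/aws-codebuild';

// Sketch only: project id, image and variable values are placeholders.
const dockerBuild = new codebuild.PipelineProject(this, 'DockerImageBuild', {
  environment: {
    buildImage: codebuild.LinuxBuildImage.STANDARD_5_0,
    privileged: true, // required so docker build/push can run inside the build container
  },
  environmentVariables: {
    // AWS_REGION is provided by CodeBuild automatically, so only REPO and IMAGE are set here.
    REPO: { value: '<account-id>.dkr.ecr.<region>.amazonaws.com' },
    IMAGE: { value: 'my-app' },
  },
  // With no buildSpec given, CodeBuild reads buildspec.yml from the input artifact,
  // i.e. the file shown above.
});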
There is a good tutorial on this topic here:
http://queirozf.com/entries/using-aws-codepipeline-to-automatically-build-and-deploy-your-app-stored-on-github-as-a-docker-based-beanstalk-application
Sorry about the inconvenience. Making this less restrictive is on our roadmap. Meanwhile, in order to use the CodeBuild action, you can use a dummy file as the output artifact.