How to have a successful deploy in an AWS pipeline with artifacts - amazon-web-services

When I try to deploy my project with the pipeline, I sometimes get the failure below.
Can you advise me what is wrong?
Action execution failed due:
Action execution failed
ChangeSet [abc-changeset] does not exist (Service: AmazonCloudFormation; Status Code: 404; Error Code: ChangeSetNotFound; Request ID: f49ef4e7-6971-4ea1-9467-05c2213c7bc4)
After pressing retry, the problem resolves itself. Would you mind helping me fix it?
My buildspec.yml is as below:
version: 0.2
phases:
  install:
    runtime-versions:
      nodejs: 12
  pre_build:
    commands:
      - echo Install source NPM dependencies...
      - npm install
  build:
    commands:
      - export BUCKET=abc_bucket
      - echo copy file to S3 bucket...
      - aws s3 cp openapi.yml s3://abc_bucket/openapi.yml
      - echo packaging files by using cloudformation...
      - aws cloudformation package --template-file template.yml --s3-bucket $BUCKET --output-template-file outputtemplate.yml
artifacts:
  type: zip
  files:
    - template.yml
    - outputtemplate.yml

The issue you are noticing occurs because both the actions 'Create or Replace Change Set' and 'Execute Change Set' have been added to the same action group in the 'Deploy' stage, which creates a race condition between change set creation and execution. To fix the issue, create a new action group and move the 'execute-changeset' action into that new action group.

If anybody is still having this issue, just add "runOrder" to the actions: 1 for the create-or-replace change set action and 2 for the execute change set action.
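For reference, a minimal sketch of how the Deploy stage might look in a CloudFormation definition of the pipeline (AWS::CodePipeline::Pipeline) with the two run orders; the stack name, change set name, artifact name, and action names here are placeholders, and details such as the role ARN and capabilities are omitted:
- Name: Deploy
  Actions:
    - Name: CreateChangeSet
      ActionTypeId:
        Category: Deploy
        Owner: AWS
        Provider: CloudFormation
        Version: '1'
      Configuration:
        ActionMode: CHANGE_SET_REPLACE
        StackName: my-stack                       # placeholder
        ChangeSetName: abc-changeset              # placeholder
        TemplatePath: BuildOutput::outputtemplate.yml
      InputArtifacts:
        - Name: BuildOutput
      RunOrder: 1                                 # create/replace the change set first
    - Name: ExecuteChangeSet
      ActionTypeId:
        Category: Deploy
        Owner: AWS
        Provider: CloudFormation
        Version: '1'
      Configuration:
        ActionMode: CHANGE_SET_EXECUTE
        StackName: my-stack                       # placeholder
        ChangeSetName: abc-changeset              # placeholder
      RunOrder: 2                                 # execute only after the change set exists
With the two actions in separate run orders, the execute action no longer starts before the change set has been created.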

AWS CDK CodePipeline deploying app and CDK

I'm using the AWS CDK with TypeScript and I'd like to automate my CDK and code package deployments.
I have 2 GitHub repos: app-cdk and app-website.
I have set up a CodePipeline as follows:
const pipeline = new CodePipeline(this, 'MyAppPipeline', {
  pipelineName: 'MyAppPipeline',
  synth: new ShellStep('Synth', {
    input: CodePipelineSource.gitHub(`${ORG_NAME}/app-cdk`, BRANCH_NAME, {
    }),
    commands: ['npm ci', 'npm run build', 'npx cdk synth']
  })
});
and added a beta stage as follows
pipeline.addStage(new MyAppStage(this, 'Beta', {
  env: { account: 'XXXXXXXXX', region: 'us-east-2' }
}))
This works fine when I push code to my CDK code package and deploys new resources. How can I add my website repo as a source to kick off this pipeline, build it in a different manner, and deploy the assets to the necessary resources? Shouldn't that be a part of the CodePipeline's source and build stages?
I have encountered a similar scenario, where I had to create a CDK pipeline for multiple static S3 sites in a repository.
It soon became evident that this had to be done using two stacks, as the pipeline requires a step to be of type Stage and does not support Construct, whereas my static S3 websites were a construct (BucketDeployment).
The way in which I handled this integration is as follows:
deployment_code_build = cb.Project(self, 'PartnerS3deployment',
    project_name='PartnerStaticS3deployment',
    source=cb.Source.git_hub(owner='<github-org>',
        repo='<repo-name>', clone_depth=1,
        webhook_filters=[
            cb.FilterGroup.in_event_of(
                cb.EventAction.PUSH).and_branch_is(
                branch_name="main")]),
    environment=cb.BuildEnvironment(
        build_image=cb.LinuxBuildImage.STANDARD_5_0
    ))
This added/provisioned a CodeBuild project which would dynamically deploy the change sets of the stacks listed by cdk ls.
The above CodeBuild project will need a buildspec file in the root of the repo with the following code (for reference):
version: 0.2
phases:
  install:
    commands:
      - echo Entered in install phase...
      - npm install -g aws-cdk
      - cdk --version
  build:
    commands:
      - pwd
      - cd cdk_pipeline_static_websites
      - ls -lah
      - python -m pip install -r requirements.txt
      - nohup ./parallel_deploy.sh & echo $! > pidfile && wait $(cat pidfile)
    finally:
      - echo Build completed on `date`
The contents of parallel_deploy.sh are as follows
#!/bin/bash
for stack in $(cdk list);
do
  cdk deploy $stack --require-approval=never &
done;
# Wait for all backgrounded deploys to finish before the script (and build phase) exits.
wait
While this works great, there has to be a simpler alternative which can directly import other stacks/constructs in the CDK Pipeline class.

AWS CodeBuild batch build-list not running phases for each build identifier

I'm new to AWS CodeBuild and have been trying to work out how to run the parts of the build in parallel (or even just use the same buildspec.yml for each project in my solution).
I thought the batch -> build-list was the way to go. From my understanding of the documentation this will run the phases in the buildspec for each item in the build list.
Unfortunately that does not appear to be the case - the batch section appears to be ignored and the buildspec runs the phases once, for the default environment variables held at project level.
My buildspec is
version: 0.2
batch:
  fast-fail: false
  build-list:
    - identifier: getPrintJobNote
      env:
        variables:
          IMAGE_REPO_NAME: getprintjobnote
          FOLDER_NAME: getPrintJobNote
      ignore-failure: false
    - identifier: GetPrintJobFilters
      env:
        variables:
          IMAGE_REPO_NAME: getprintjobfilters
          FOLDER_NAME: GetPrintJobFilters
      ignore-failure: false
phases:
  pre_build:
    commands:
      - echo Logging into Amazon ECR
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
  build:
    commands:
      - echo Building lambda docker container
      - echo Build path $CODEBUILD_SRC_DIR
      - cd $CODEBUILD_SRC_DIR/src/$FOLDER_NAME
      - docker build -t $IMAGE_REPO_NAME:$IMAGE_TAG .
      - docker tag $IMAGE_REPO_NAME:$IMAGE_TAG $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG
  post_build:
    commands:
      - echo Pushing to Amazon ECR
      - docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG
Is there something wrong in my buildspec, does build-list not do what I think it does, or is there something else needed to be configured somewhere to enable this?
In the project configuration I found a setting for "enable concurrent build limit - optional". I tried changing this but got an error:
Project-level concurrent build limit cannot exceed the account-level concurrent build limit of 1.
This may not be related but could be because my account is new... I think the default should be 60 anyway.
I had a similar problem; it turned out that batch builds are a separate build type. Go to the project -> start build with overrides, then select batch build.
I also split the buildspec file: the first spec has the batch config, and the second one has the "actual" phases, referenced via the buildspec: directive (see the sketch below). Not sure if this is required, though.
Also: if builds are hook-triggered, the webhook also has to be configured to run a batch build.
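To illustrate the split-buildspec idea, a minimal sketch assuming hypothetical file names buildspec-batch.yml and buildspec-phases.yml; the second file would simply contain the phases (and any artifacts) sections from the original buildspec, unchanged:
# buildspec-batch.yml (placeholder name) - the spec submitted to the batch build
version: 0.2
batch:
  fast-fail: false
  build-list:
    - identifier: getPrintJobNote
      buildspec: buildspec-phases.yml   # second file holding the "actual" phases
      env:
        variables:
          IMAGE_REPO_NAME: getprintjobnote
          FOLDER_NAME: getPrintJobNote
    - identifier: GetPrintJobFilters
      buildspec: buildspec-phases.yml
      env:
        variables:
          IMAGE_REPO_NAME: getprintjobfilters
          FOLDER_NAME: GetPrintJobFilters
When the build is started as a batch build, each build-list entry then runs the shared phases with its own environment variables.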

How to solve an AWS Lambda function deployment problem?

.. aaaand me again :)
This time with a very interesting problem.
Again an AWS Lambda function: Node.js 12, JavaScript, Ubuntu 18.04 for local development, AWS CLI/AWS SAM/Docker/IntelliJ. Everything is working perfectly locally, and it is time to deploy.
So I set up an AWS account for tests, created and assigned an access key/secret, and finally tried to deploy.
Almost at the end, an error pops up and aborts the deployment.
I'm showing the SAM CLI run from a terminal, but the same happens with IntelliJ.
(Of course I mask/change some names.)
From a terminal I go to where I have my local sandbox with the project, and then:
$ sam deploy --guided
Configuring SAM deploy
======================
Looking for config file [samconfig.toml] : Not found
Setting default arguments for 'sam deploy'
=========================================
Stack Name [sam-app]: MyActualProjectName
AWS Region [us-east-1]: us-east-2
#Shows you resources changes to be deployed and require a 'Y' to initiate deploy
Confirm changes before deploy [y/N]: y
#SAM needs permission to be able to create roles to connect to the resources in your template
Allow SAM CLI IAM role creation [Y/n]: y
Save arguments to configuration file [Y/n]: y
SAM configuration file [samconfig.toml]: y
SAM configuration environment [default]:
Looking for resources needed for deployment: Not found.
Creating the required resources...
Successfully created!
Managed S3 bucket: aws-sam-cli-managed-default-samclisourcebucket-7qo1hy7mdu9z
A different default S3 bucket can be set in samconfig.toml
Saved arguments to config file
Running 'sam deploy' for future deployments will use the parameters saved above.
The above parameters can be changed by modifying samconfig.toml
Learn more about samconfig.toml syntax at
https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-config.html
Error: Unable to upload artifact MyFunctionName referenced by CodeUri parameter of MyFunctionName resource.
ZIP does not support timestamps before 1980
$
I spent quite some time looking around for this problem, but I found only some old threads.
In theory this problem was solved in 2018 ... but probably some npm libraries I had to use contain something old ... how in the world do I fix this stuff?
In one thread I found a kind of workaround.
In the file buildspec.yml, somebody suggested adding, AFTER the npm install:
ls $CODEBUILD_SRC_DIR
find $CODEBUILD_SRC_DIR/node_modules -mtime +10950 -exec touch {} \;
Basically the idea is to touch all the files installed by npm install, but the error still happens.
This is my buildspec.yml file after the modification:
version: 0.2
phases:
  install:
    commands:
      # Install all dependencies (including dependencies for running tests)
      - npm install
      - ls $CODEBUILD_SRC_DIR
      - find $CODEBUILD_SRC_DIR/node_modules -mtime +10950 -exec touch {} \;
  pre_build:
    commands:
      # Discover and run unit tests in the '__tests__' directory
      - npm run test
      # Remove all unit tests to reduce the size of the package that will be ultimately uploaded to Lambda
      - rm -rf ./__tests__
      # Remove all dependencies not needed for the Lambda deployment package (the packages from devDependencies in package.json)
      - npm prune --production
  build:
    commands:
      # Use AWS SAM to package the application by using AWS CloudFormation
      - aws cloudformation package --template template.yml --s3-bucket $S3_BUCKET --output-template template-export.yml
artifacts:
  type: zip
  files:
    - template-export.yml
I will continue to search, but again I wonder if somebody here has had this kind of problem and has some suggestions/methodology about how to solve it.
Many, many thanks!
Steve
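One follow-up idea, offered only as an assumption rather than a confirmed fix: the pre-1980 timestamp may come from files outside node_modules as well, so normalizing every file in the source tree before packaging at least rules that possibility out. A minimal sketch of the install phase with that broader touch:
  install:
    commands:
      - npm install
      # Sketch only: touch every file in the source tree, not just node_modules,
      # so any pre-1980 timestamp is normalized before `aws cloudformation package` zips the code.
      - find $CODEBUILD_SRC_DIR -mtime +10950 -exec touch {} \;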

AWS Lambda CodePipeline can't find my deps.json file?

I'm trying to build & deploy a simple sample project to learn AWS. C# / .NET Core.
My buildspec looks like this:
version: 0.2
phases:
  install:
    runtime-versions:
      dotnet: 2.2
  pre_build:
    commands:
      - echo Restore started on `date`
      - dotnet restore AWSServerless1.csproj
  build:
    commands:
      - echo Build started on `date`
      - dotnet publish -c release -o ./build_output AWSServerless1.csproj
artifacts:
  files:
    - ./build_output/**/*
    - scripts/**/*
    - appspec.yml
  discard-paths: yes
My appspec looks like this:
version: 0.0
Resources:
  - myStack-AspNetCoreFunction-1HPKUEU7I6GFW:
      Type: AWS::Lambda::Function
      Properties:
        Name: "myStack-AspNetCoreFunction-1HPKUEU7I6GFW"
        Alias: "AWSServerless1"
        CurrentVersion: "1"
        TargetVersion: "2"
The pipeline completes successfully, but when I try to run the lambda, I get a 502. I checked the logs and it says:
Could not find the required 'AWSServerless1.deps.json'. This file should be present at the root of the deployment package.: LambdaException
When I download the package from S3, to me, it looks like everything is there. It's a zip file, no paths anywhere, everything is in the root of the zip including AWSServerless1.deps.json.
Any ideas?
Use dotnet lambda package instead of dotnet publish.
See https://github.com/aws/aws-extensions-for-dotnet-cli
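A rough sketch of what the buildspec might look like with that change. The tool install step, the PATH export, and the exact flags are assumptions from memory (check dotnet lambda package --help for the options your version supports), and the output zip name is a placeholder:
version: 0.2
phases:
  install:
    runtime-versions:
      dotnet: 2.2
    commands:
      # Provides the `dotnet lambda ...` commands used below.
      - dotnet tool install -g Amazon.Lambda.Tools
      # The global tools directory may need to be on PATH in the build container.
      - export PATH="$PATH:$HOME/.dotnet/tools"
  build:
    commands:
      # Packages the project the way Lambda expects, including AWSServerless1.deps.json.
      - dotnet lambda package --configuration Release --output-package ./build_output/AWSServerless1.zip
artifacts:
  files:
    - ./build_output/AWSServerless1.zip
    - scripts/**/*
    - appspec.yml
  discard-paths: yes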

AWS CodeBuild + CodePipeline: "No matching artifact paths found"

I am attempting to get CodePipeline to fetch my code from GitHub and build it with CodeBuild. The first (Source) step works fine. But the second (Build) step fails during the "UPLOAD_ARTIFACTS" part. Here are the relevant log statements:
[Container] 2017/01/12 17:21:31 Assembling file list
[Container] 2017/01/12 17:21:31 Expanding MyApp
[Container] 2017/01/12 17:21:31 Skipping invalid artifact path MyApp
[Container] 2017/01/12 17:21:31 Phase complete: UPLOAD_ARTIFACTS Success: false
[Container] 2017/01/12 17:21:31 Phase context status code: ARTIFACT_ERROR Message: No matching artifact paths found
[Container] 2017/01/12 17:21:31 Runtime error (No matching artifact paths found)
My app has a buildspec.yml in its root folder. It looks like:
version: 0.1
phases:
  build:
    commands:
      - echo `$BUILD_COMMAND`
artifacts:
  discard-paths: yes
  files:
    - MyApp
It would appear that the "MyApp" in my buildspec.yml should be something different, but I'm poring over all of the AWS docs to no avail (what else is new?). How can I get it to upload the artifact correctly?
The artifacts should refer to files downloaded from your Source action or generated as part of the Build action in CodePipeline. For example, this is from a buildspec.yml I wrote:
artifacts:
  files:
    - appspec.yml
    - target/SampleMavenTomcatApp.war
    - scripts/*
When I see that you used MyApp in your artifacts section, it makes me think you're referring to the OutputArtifacts of the Source action of CodePipeline. Instead, you need to refer to the files that action downloads and stores there (i.e. in S3) and/or that the build generates and stores there.
You can find a sample of a CloudFormation template that uses CodePipeline, CodeBuild, CodeDeploy, and CodeCommit here: https://github.com/stelligent/aws-codedeploy-sample-tomcat/blob/master/codebuild-cpl-cd-cc.json The buildspec.yml is in the same forked repo.
Buildspec artifacts are information about where CodeBuild can find the build output and how CodeBuild prepares it for uploading to the Amazon S3 output bucket.
For the error "No matching artifact paths found", a couple of things to check:
The artifact file(s) specified in the buildspec.yml file have the correct path and file name.
artifacts:
  files:
    - 'FileNameWithPath'
If you are using a .gitignore file, make sure the file(s) specified in the artifacts section are not included in the .gitignore file.
Hope this helps.
In my case I received this error because I had changed directory in my build stage (the Java project I am building is in a subdirectory) and did not change back to the root. Adding cd .. at the end of the build stage did the trick.
I had a similar issue, and the solution to the problem was "packaging directories and files inside the archive with no further root folder creation".
https://docs.aws.amazon.com/codebuild/latest/userguide/sample-war-hw.html
Artifacts are the stuff you want from your build process - whether compiled in some way or just files copied straight from the source. So the build server pulls in the code, compiles it as per your instructions, then copies the specified files out to S3.
In my case, using Spring Boot + Gradle, the output jar file (when I run gradle bootJar on my own system) is placed in build/libs/demo1-0.0.1-SNAPSHOT.jar, so I set the following in buildspec.yml:
artifacts:
  files:
    - build/libs/*.jar
This one file appears for me in S3, optionally in a zip and/or subfolder depending on the options chosen in the rest of the Artifacts section
Try using version 0.2 of the buildspec.
Here is a typical example for Node.js:
version: 0.2
phases:
  pre_build:
    commands:
      - echo Nothing to do in the pre_build phase...
  build:
    commands:
      - npm install
      - npm run build
  post_build:
    commands:
      - echo Build completed on
artifacts:
  files:
    - appspec.yml
    - build/*
If you're like me and ran into this problem whilst using CodeBuild within a CodePipeline arrangement, you need to use the following:
- printf '[{"name":"container-name-here","imageUri":"%s"}]' $REPOSITORY_URI:$IMAGE_TAG > $CODEBUILD_SRC_DIR/imagedefinitions.json
I had the same issue as #jd96 wrote about. I needed to return to the root directory of the project to export the artifact.
build:
  commands:
    - cd tasks/jobs
    - make build
    - cd ../..
post_build:
  commands:
    - printf '[{"name":"%s","imageUri":"%s"}]' $IMAGE_REPO_NAME $REPOSITORY_URI:$IMAGE_TAG > imagedefinitions.json
artifacts:
  files: imagedefinitions.json