I am new to YAML files. I want to append a timestamp to the S3 bucket folder each time so that every build is unique. In post_build I append the timestamp to the S3 path as shown below, but when the CodePipeline is triggered all files are stored in the bucket's Inhouse folder and the timestamped folder is never created: s3://${S3_BUCKET}/Inhouse/${'date'}
version: 0.2
env:
  variables:
    S3_BUCKET: Inhouse-market-dev
phases:
  install:
    runtime-versions:
      nodejs: 10
    commands:
      - npm install
      - npm install -g @angular/cli
  build:
    commands:
      - echo Build started on `date`
  post_build:
    commands:
      - aws s3 cp . s3://${S3_BUCKET}/Inhouse/${'date'} --recursive --acl public-read --cache-control "max-age=${CACHE_CONTROL}"
      - echo Build completed on `date`
I think your use of ${'date'} is incorrect. I would recommend trying the following to actually get the unix timestamp:
post_build:
  commands:
    - current_timestamp=$(date +"%s")
    - aws s3 cp . s3://${S3_BUCKET}/Inhouse/${current_timestamp} --recursive --acl public-read --cache-control "max-age=${CACHE_CONTROL}"
    - echo Build completed on `date` which is ${current_timestamp}
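If a human-readable folder name is easier to browse than a Unix epoch, a minimal variation on the same idea (the date format here is just a suggestion) would be:

post_build:
  commands:
    # e.g. Inhouse/2021-04-26-02-55-41 instead of Inhouse/1619405741
    - current_timestamp=$(date +"%Y-%m-%d-%H-%M-%S")
    - aws s3 cp . s3://${S3_BUCKET}/Inhouse/${current_timestamp} --recursive --acl public-read --cache-control "max-age=${CACHE_CONTROL}"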
Related
I'm trying to deploy to Ohio from São Paulo. I configured the buildspec and the .elasticbeanstalk config to set a variable that receives us-east as a parameter.
I have tried many ways to make this work, but I always get the error "exit status 4". This is my last attempt:
COMMAND_EXECUTION_ERROR: Error while executing command: eb deploy Logoneagendamento-teste --region us-east-2. Reason: exit status 4
And the buildspec.yml is as follows
version: 0.2
phases:
  install:
    runtime-versions:
      java: corretto8
    commands:
      - pip install --upgrade awsebcli awscli
  build:
    commands:
      - echo Starting build...
      - mvn package
      - echo eb list --region
      - eb list --region us-east-2
      - echo Starting deploy
      - eb deploy $DEPLOY_ENV -r $AWS_DEFAULT_REGION
  post_build:
    commands:
      # - command
      # - command
artifacts:
  files:
    - 'target/LogOne-Agendamento.jar'
    # - location
  #name: $(date +%Y-%m-%d)
  discard-paths: yes
I was trying to copy a file generated during CodeBuild to an S3 bucket using the cp command. I can see the file, but when I try to copy it the CLI says the file does not exist. I'm still confused about why I can't copy it. Please check the buildspec.yml below.
version: 0.2
phases:
  install:
    commands:
      - echo Installing MySQL
      - apt update
      - apt-get install mysql-client -y
      - mysqldump --version
      - mysqldump -h ***** -u $User -p****--no-data --routines --triggers -f testdb > ./backup.sql
      - ls
      - aws s3 cp backup.sql s3://dev-test --recursive --acl public-read --cache-control "max-age=100"
  post_build:
    commands:
      - echo Build completed on `date`
Please check the logs generated by AWS CodeBuild.
Logs:
[Container] 2021/04/26 02:55:41 Running command mysqldump -h ***** -u $User -p****--no-data --routines --triggers -f testdb > ./backup.sql
[Container] 2021/04/26 02:55:43 Running command ls
Jenkinsfile
README.md
backup.sql
buildspec.yml
utils.groovy
[Container] 2021/04/26 02:55:43 Running command aws s3 cp backup.sql s3://dev-test --recursive --acl public-read --cache-control "max-age=100"
warning: Skipping file /codebuild/output/src985236234/src/backup.sql/. File does not exist.
Completed 0 file(s) with ~0 file(s) remaining (calculating...)
[Container] 2021/04/26 02:55:44 Command did not exit successfully aws s3 cp backup.sql s3://dev-test --recursive --acl public-read --cache-control "max-age=100" exit status 2
[Container] 2021/04/26 02:55:44 Phase complete: INSTALL State: FAILED
[Container] 2021/04/26 02:55:44 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: aws s3 cp backup.sql s3://dev-test --recursive --acl public-read --cache-control "max-age=100". Reason: exit status 2
You are uploading a single file, backup.sql, but --recursive makes the CLI treat it as a directory.
It should be:
aws s3 cp backup.sql s3://dev-test --acl public-read --cache-control "max-age=100"
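For comparison, --recursive is intended for copying the contents of a directory rather than a single file; a quick sketch (the dump/ directory is only an illustration):

# single file: no --recursive needed
aws s3 cp backup.sql s3://dev-test/backup.sql --acl public-read --cache-control "max-age=100"

# whole directory: the source is a folder, so --recursive applies
aws s3 cp dump/ s3://dev-test/dump/ --recursive --acl public-read --cache-control "max-age=100"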
I have 2 AWS accounts, let's say A and B.
Account A uses CodeBuild to build and upload artifacts to an S3 bucket owned by B. Account B has set an ACL on the bucket to give account A write permissions.
The artifact file is successfully uploaded to the S3 bucket. However, account B doesn't have any permissions on the file, since the file is owned by A.
Account A can change the ownership by running
aws s3api put-object-acl --bucket bucket-name --key key-name --acl bucket-owner-full-control
But this has to be run manually after every build from account A. How can I grant permissions to account B as part of the CodeBuild process? Or how can account B get around this ownership permission error?
CodeBuild starts automatically via webhooks, and my buildspec is this:
version: 0.2
env:
phases:
  install:
    runtime-versions:
      java: openjdk8
    commands:
      - echo Entered the install phase...
  build:
    commands:
      - echo Entered the build phase...
  post_build:
    commands:
      - echo Entered the post_build phase...
artifacts:
  files:
    - 'myFile.txt'
CodeBuild does not natively support writing artifacts to a different account, as it does not set the proper ACL on the cross-account object. This is the reason the following limitation is called out in the CodePipeline documentation:
Cross-account actions are not supported for the following action types:
Jenkins build actions
CodeBuild build or test actions
https://docs.aws.amazon.com/codepipeline/latest/userguide/pipelines-create-cross-account.html
One workaround is setting the ACL on the artifact yourself in CodeBuild:
version: 0.2
phases:
  post_build:
    commands:
      - aws s3api list-objects --bucket testingbucket --prefix CFNtest/OutputArti >> $CODEBUILD_SRC_DIR/objects.json
      - |
        for i in $(jq -r '.Contents[]|.Key' $CODEBUILD_SRC_DIR/objects.json); do
          echo $i
          aws s3api put-object-acl --bucket testingbucket --key $i --acl bucket-owner-full-control
        done
I did it using AWS CLI commands from the build phase.
version: 0.2
phases:
  build:
    commands:
      - mvn install...
      - aws s3 cp my-file s3://bucketName --acl bucket-owner-full-control
I am using the build phase, since post_build will be executed even if the build was not successful.
edit: updated answer with a sample.
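As an aside, if you would rather keep the upload in post_build, newer buildspec revisions can abort the whole build when a phase fails; a sketch, assuming the on-failure setting is available in your CodeBuild environment:

version: 0.2
phases:
  build:
    on-failure: ABORT   # assumption: on-failure is supported; stops the build if mvn fails
    commands:
      - mvn install...
  post_build:
    commands:
      # only reached when the build phase succeeded
      - aws s3 cp my-file s3://bucketName --acl bucket-owner-full-control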
How can I remove unwanted files in an S3 bucket as the output of a pipeline in CodePipeline, using CodeBuild's buildspec.yml file?
For example:
The build folder of a GitHub repo is put in the designated S3 bucket so the bucket can be used as a static website.
I pushed a file earlier to the bucket which I don't need anymore. How do I use the buildspec.yml file to "clean" the bucket before pushing the artifacts of my pipeline to the bucket?
An example buildspec.yml file:
version: 0.2
phases:
  build:
    commands:
      - mkdir build-output
      - find . -type d -name public -exec cp -R {} build-output \;
      - find . -mindepth 1 -name build-output -prune -o -exec rm -rf {} +
  post_build:
    commands:
      - mv build-output/**/* ./
      - mv build-output/* ./
      - rm -R build-output
artifacts:
  files:
    - '**/*'
Should the command be something like rm -rf *, added to the build phase like this?
build:
  commands:
    - aws s3 rm s3://mybucket/ --recursive
And how do I reference the right bucket instead of hardcoding the name in the file?
To delete the files in the S3 bucket, you can use the aws s3 rm --recursive command as you already alluded to.
You can pass the bucket name from the pipeline to CodeBuild by setting it as an environment variable.
ArtifactsBucket:
  Type: AWS::S3::Bucket
  Properties:
    BucketName: my-artifacts

CodeBuildProject:
  Type: AWS::CodeBuild::Project
  Properties:
    Environment:
      EnvironmentVariables:
        - Name: ARTIFACTS_BUCKET
          Value: !Ref ArtifactsBucket
          Type: PLAINTEXT
In the buildspec, you can then refer to the ARTIFACTS_BUCKET env var, for example:
build:
  commands:
    - aws s3 rm --recursive "s3://${ARTIFACTS_BUCKET}/"
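Depending on the workflow, another option worth considering is aws s3 sync with --delete, which uploads the new build and removes objects that no longer exist locally in a single step (the build-output/ folder below is just an assumed example path):

build:
  commands:
    # upload the new site and prune anything in the bucket that is no longer in build-output/
    - aws s3 sync build-output/ "s3://${ARTIFACTS_BUCKET}/" --delete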
An alternative approach you could take is to declare lifecycle management on the bucket. For example, you can say "delete all objects after 30 days" like so:
ArtifactsBucket:
  Type: AWS::S3::Bucket
  Properties:
    BucketName: my-artifacts
    LifecycleConfiguration:
      Rules:
        - ExpirationInDays: 30
          Id: Expire objects in 30 days
          Status: Enabled
I'm trying to get AWS CodePipeline working with an S3 source, CodeBuild, and Elastic Beanstalk (Node.js environment).
My problem lies between CodeBuild and Beanstalk.
I have CodeBuild outputting a zip file of the final Node.js app via the artifacts. Here is my CodeBuild buildspec.yml:
version: 0.1
phases:
  install:
    commands:
      - echo Installing Node Modules...
      - npm install -g mocha
      - npm install
  post_build:
    commands:
      - echo Performing Test
      - npm test
      - zip -r app-api.zip .
artifacts:
  files:
    - app-api.zip
When I manually run CodeBuild it successfully puts the zip into S3. When I run CodePipeline it puts the zip on each Elastic Beanstalk instance in /var/app/current as app-api.zip
What I would like is for it to extract app-api.zip into /var/app/current, just like a manual deploy via the Elastic Beanstalk console interface.
First, a quick explanation. CodePipeline sends whatever files you specify as artifacts to Elastic Beanstalk. In your case, you are sending app-api.zip.
What you probably want to do instead, is to send all the files, but not wrap them in a ZIP.
Let's change your buildspec.yml to not create app-api.zip and instead, send the raw files to CodePipeline.
version: 0.1
phases:
  install:
    commands:
      - echo Installing Node Modules...
      - npm install -g mocha
      - npm install
  post_build:
    commands:
      - echo Performing Test
      - npm test
      # - zip -r app-api.zip .   << Remove this line
artifacts:
  files:
    - '**/*'
  # Replace artifacts/files with the value shown above