I was trying to copy a file generated during CodeBuild to an S3 bucket using the cp command. I can see the file, but when I try to copy it the command says the file does not exist. I am still confused about why I can't copy the file. Please check the buildspec.yml below.
version: 0.2
phases:
  install:
    commands:
      - echo Installing MySQL
      - apt update
      - apt-get install mysql-client -y
      - mysqldump --version
      - mysqldump -h ***** -u $User -p****--no-data --routines --triggers -f testdb > ./backup.sql
      - ls
      - aws s3 cp backup.sql s3://dev-test --recursive --acl public-read --cache-control "max-age=100"
  post_build:
    commands:
      - echo Build completed on `date`
Please check the logs generated by AWS CodeBuild.
Logs:
[Container] 2021/04/26 02:55:41 Running command mysqldump -h ***** -u $User -p****--no-data --routines --triggers -f testdb > ./backup.sql
[Container] 2021/04/26 02:55:43 Running command ls
Jenkinsfile
README.md
backup.sql
buildspec.yml
utils.groovy
[Container] 2021/04/26 02:55:43 Running command aws s3 cp backup.sql s3://dev-test --recursive --acl public-read --cache-control "max-age=100"
warning: Skipping file /codebuild/output/src985236234/src/backup.sql/. File does not exist.
Completed 0 file(s) with ~0 file(s) remaining (calculating...)
[Container] 2021/04/26 02:55:44 Command did not exit successfully aws s3 cp backup.sql s3://dev-test --recursive --acl public-read --cache-control "max-age=100" exit status 2
[Container] 2021/04/26 02:55:44 Phase complete: INSTALL State: FAILED
[Container] 2021/04/26 02:55:44 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: aws s3 cp backup.sql s3://dev-test --recursive --acl public-read --cache-control "max-age=100". Reason: exit status 2
You are uploading a single file, backup.sql, but --recursive will treat it as a directory.
It should be:
aws s3 cp backup.sql s3://dev-test --acl public-read --cache-control "max-age=100"
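For comparison, the two modes look like this (bucket name as in the question; the ./dumps directory is purely illustrative):
aws s3 cp backup.sql s3://dev-test/backup.sql --acl public-read --cache-control "max-age=100"
aws s3 cp ./dumps s3://dev-test/dumps --recursive --acl public-read --cache-control "max-age=100"
The first form uploads a single object to an explicit key; the second recursively copies the contents of a local directory under a prefix.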
I am new to YAML files. I want to append a timestamp to the S3 bucket folder every time so that each build is unique. In post_build I append the timestamp to the S3 path as follows. When the CodePipeline is triggered, all files are stored to the Inhouse folder of the S3 bucket, but the folder with the timestamp is not generated: s3://${S3_BUCKET}/Inhouse/${'date'}
Version: 0.2
env:
  variables:
    S3_BUCKET: Inhouse-market-dev
phases:
  install:
    runtime-versions:
      nodejs: 10
    commands:
      - npm install
      - npm install -g @angular/cli
  build:
    commands:
      - echo Build started on `date`
  post_build:
    commands:
      - aws s3 cp . s3://${S3_BUCKET}/Inhouse/${'date'} --recursive --acl public-read --cache-control "max-age=${CACHE_CONTROL}"
      - echo Build completed on `date`
I think your use of ${'date'} is incorrect. I would recommend trying the following to actually get the unix timestamp:
post_build:
  commands:
    - current_timestamp=$(date +"%s")
    - aws s3 cp . s3://${S3_BUCKET}/Inhouse/${current_timestamp} --recursive --acl public-read --cache-control "max-age=${CACHE_CONTROL}"
    - echo Build completed on `date` which is ${current_timestamp}
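For reference, date +"%s" yields an epoch value such as 1619405741, so the build output lands under a key like s3://Inhouse-market-dev/Inhouse/1619405741/. If a human-readable folder name is preferred, a format string along these lines works just as well (the exact format here is only an illustration):
    - current_timestamp=$(date +"%Y-%m-%d-%H-%M-%S")
    - aws s3 cp . s3://${S3_BUCKET}/Inhouse/${current_timestamp} --recursive --acl public-read --cache-control "max-age=${CACHE_CONTROL}"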
I have 2 AWS accounts. Let's say A and B.
Account A uses CodeBuild to build and upload artifacts to an S3 bucket owned by B. Account B has set an ACL on the bucket in order to give write permissions to A.
The artifact file is successfully uploaded to the S3 bucket. However, account B doesn't have any permissions on the file, since the file is owned by A.
Account A can change the ownership by running
aws s3api put-object-acl --bucket bucket-name --key key-name --acl bucket-owner-full-control
But this has to be run manually after every build from account A. How can I grant permissions to account B through the CodeBuild procedure? Or how can account B get around this ownership/permission error?
CodeBuild starts automatically via webhooks, and my buildspec is this:
version: 0.2
env:
phases:
  install:
    runtime-versions:
      java: openjdk8
    commands:
      - echo Entered the install phase...
  build:
    commands:
      - echo Entered the build phase...
  post_build:
    commands:
      - echo Entered the post_build phase...
artifacts:
  files:
    - 'myFile.txt'
CodeBuild does not natively support writing artifacts to a different account, as it does not set the proper ACL on the cross-account object. This is the reason the following limitation is called out in the CodePipeline documentation:
Cross-account actions are not supported for the following action types:
Jenkins build actions
CodeBuild build or test actions
https://docs.aws.amazon.com/codepipeline/latest/userguide/pipelines-create-cross-account.html
One workaround is setting the ACL on the artifacts yourself in CodeBuild:
version: 0.2
phases:
  post_build:
    commands:
      - aws s3api list-objects --bucket testingbucket --prefix CFNtest/OutputArti >> $CODEBUILD_SRC_DIR/objects.json
      - |
        for i in $(jq -r '.Contents[]|.Key' $CODEBUILD_SRC_DIR/objects.json); do
          echo $i
          aws s3api put-object-acl --bucket testingbucket --key $i --acl bucket-owner-full-control
        done
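The same workaround can also be expressed without the temporary JSON file by letting the CLI's --query do the filtering; a minimal sketch of the shell pipeline (same bucket and prefix, and it can live under the same post_build commands):
aws s3api list-objects --bucket testingbucket --prefix CFNtest/OutputArti \
  --query 'Contents[].Key' --output text | tr '\t' '\n' | while read -r key; do
    aws s3api put-object-acl --bucket testingbucket --key "$key" --acl bucket-owner-full-control
  done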
I did it using aws cli commands from the build phase.
version: 0.2
phases:
  build:
    commands:
      - mvn install...
      - aws s3 cp my-file s3://bucketName --acl bucket-owner-full-control
I am using the build phase, since post_build will be executed even if the build was not successful.
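If the upload has to stay in post_build for some reason, CodeBuild's built-in CODEBUILD_BUILD_SUCCEEDING variable (1 when the build phase succeeded, 0 otherwise) can guard it; a small sketch:
# run the upload only when the build phase succeeded
if [ "$CODEBUILD_BUILD_SUCCEEDING" = "1" ]; then
  aws s3 cp my-file s3://bucketName --acl bucket-owner-full-control
fi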
edit: updated answer with a sample.
I have a Dockerfile that installs awscli and then tries to run aws s3 cp to get a file and put it onto the Docker image.
My Dockerfile is:
FROM my-kie-server:latest
USER root
RUN echo "ip_resolve=4" >> /etc/yum.conf
ENV http_proxy host.docker.internal:9000
ENV https_proxy host.docker.internal:9000
ENV HTTP_PROXY host.docker.internal:9000
ENV HTTPS_PROXY host.docker.internal:9000
RUN yum install -y maven
RUN yum install -y awscli
USER jboss
ARG AWS_ACCESS_KEY_ID
ARG AWS_SECRET_ACCESS_KEY
RUN aws s3 cp s3://myBucket/myPath/myFile.jar x.jar
But when I build the image I get this error:
fatal error: [SSL: UNKNOWN_PROTOCOL] unknown protocol (_ssl.c:618)
The command '/bin/sh -c aws s3 cp s3://myBucket/myPath/myFile.jar x.jar' returned a non-zero code: 1
I have tried using --no-verify-ssl on the aws s3 cp command but get the same error.
I've found very little online that mentions this UNKNOWN_PROTOCOL error. Any advice appreciated, thanks.
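One thing that may be worth checking (an assumption based on the Dockerfile above, not something the error output confirms): the proxy variables carry no URL scheme, so the CLI may end up attempting a TLS handshake against a plain-HTTP proxy port, which can surface as UNKNOWN_PROTOCOL. Spelling the scheme out would look like:
ENV http_proxy http://host.docker.internal:9000
ENV https_proxy http://host.docker.internal:9000
ENV HTTP_PROXY http://host.docker.internal:9000
ENV HTTPS_PROXY http://host.docker.internal:9000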
I am working on a pipeline where I push some artifacts into S3. I have written a shell script which downloads the folder and copies each file to its desired location on a WildFly server (EC2 instance).
#!/bin/bash
mkdir /home/ec2-user/test-temp
cd /home/ec2-user/test-temp
aws s3 cp s3://deploy-artifacts/test-APP test-APP --recursive --region us-east-1
aws s3 cp s3://deploy-artifacts/test-COMMON test-COMMON --recursive --region us-east-1
cd /home/ec2-user/
sudo mkdir -p /opt/wildfly/modules/system/layers/base/psg/common
sudo cp -rf ./test-temp/test-COMMON/standalone/configuration/standalone.xml /opt/wildfly/standalone/configuration
sudo cp -rf ./test-temp/test-COMMON/modules/system/layers/base/com/microsoft/* /opt/wildfly/modules/system/layers/base/com/microsoft/
sudo cp -rf ./test-temp/test-COMMON/modules/system/layers/base/com/mysql /opt/wildfly/modules/system/layers/base/com/
sudo cp -rf ./test-temp/test-COMMON/modules/system/layers/base/psg/common/* /opt/wildfly/modules/system/layers/base/psg/common
sudo cp -rf ./test-temp/test-APP/standalone/deployments/HS.war /opt/wildfly/standalone/deployments
sudo cp -rf ./test-temp/test-APP/bin/resource /opt/wildfly/bin/resource
sudo cp -rf ./test-temp/test-APP/modules/system/layers/base/psg/* /opt/wildfly/modules/system/layers/base/psg/
sudo cp -rf ./test-temp/test-APP/standalone/deployments/* /opt/wildfly/standalone/deployments/
sudo chown -R wildfly:wildfly /opt/wildfly/
sudo service wildfly start
But every time I push new artifacts into S3, I have to go to the server and run this script manually. Is there a way to automate it? I was reading about Lambda, but once Lambda knows about the change in S3, where am I going to define my shell script to run?
Any guidance will be helpful.
To trigger the Lambda function when a file is uploaded to the S3 bucket, you have to set up an event notification on the S3 bucket.
Steps for setting up the S3 event notification:
1 - Your Lambda function and S3 bucket should be in the same region.
2 - Go to the Properties tab of the S3 bucket.
3 - Open the Events section and provide values for the event types, such as put or copy.
4 - Specify the Lambda ARN in the "Send to" option (a CLI equivalent of this setup is sketched below).
Now create the Lambda function and add the S3 bucket as a trigger. Just make sure your Lambda IAM policy is set properly.
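For reference, the notification can also be created from the CLI; a minimal sketch, with the bucket name taken from the script above and a purely illustrative function name and account ID:
aws s3api put-bucket-notification-configuration --bucket deploy-artifacts \
  --notification-configuration '{
    "LambdaFunctionConfigurations": [{
      "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:run-deploy-script",
      "Events": ["s3:ObjectCreated:Put", "s3:ObjectCreated:Copy"]
    }]
  }'
S3 also needs permission to invoke the function (aws lambda add-permission with principal s3.amazonaws.com); the console trigger setup handles that automatically.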
I have a bucket in the EU (London) region in S3. I am trying to upload a tar file through the command line. At the end it throws the following error:
A client error (PermanentRedirect) occurred when calling the PutObject operation: The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.
I have configured everything correctly using aws configure, giving the correct access key and region. Can someone shed light on this issue?
I have created a script to upload the database by creating a tar file:
HOST=abc.com
DBNAME=db
BUCKET=s3.eu-west-2.amazonaws.com/<bucketname>/
USER=<user>
stamp=`date +"%Y-%m-%d"`
filename="Mgdb_$stamp.tar.gz"
TIME=`/bin/date +%Y-%m-%d-%T`
DEST=/home/$USER/tmp
TAR=$DEST/../$TIME.tar.gz
/bin/mkdir -p $DEST
echo "Backing up $HOST/$DBNAME to s3://$BUCKET/ on $TIME";
/usr/bin/mongodump --host $HOST --port 1234 -u "user" -p "pass" --authenticationDatabase "admin" -o $DEST
/bin/tar czvf $TAR -C $DEST .
/usr/bin/aws s3 cp $TAR s3://$BUCKET/$stamp/$filename
/bin/rm -f $TAR
/bin/rm -rf $DEST
Just append the region to the AWS Command-Line Interface (CLI) command:
aws s3 cp file.txt s3://my-bucket/file.txt --region eu-west-2
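Alternatively, the region can be stored once as the CLI default instead of being passed on every command:
aws configure set region eu-west-2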
The format for the S3Uri in your script is incorrect. It should be s3://<bucketname>/<prefix>/<filename>. Then you add the --region option to specify the bucket region.
BUCKET=<bucketname>
/usr/bin/aws s3 cp $TAR s3://$BUCKET/$stamp/$filename --region eu-west-2