Situation
I had a successful build on CircleCI and tried deploying to AWS S3, but a problem occurred.
Goal
Build on CircleCI → AWS S3 → AWS CloudFront
This is my repo.
Error
#!/bin/bash -eo pipefail
if ["${develop}"=="master"]
then
aws --region ${AWS_REGION} s3 sync ~/repo/build s3://{AWS_BUCKET_PRODUCTION} --delete
elif ["${develop}" == "staging"]
then
aws --region ${AWS_REGION} s3 sync ~/repo/build s3://{AWS_BUCKET_STAGING} --delete
else
aws --region ${AWS_REGION} s3 sync ~/repo/build s3://{AWS_BUCKET_DEV} --delete
fi
/bin/bash: [==master]: command not found
/bin/bash: line 3: [: missing `]'
fatal error: Parameter validation failed:
Invalid bucket name "{AWS_BUCKET_DEV}": Bucket name must match the regex "^[a-zA-Z0-9.\-_]{1,255}$"
Variable names
AWS_BUCKET_PRODUCTION
AWS_BUCKET_STAGING
AWS_DEV_BUCKET
I used this site to check my bucket name:
https://regex101.com/r/2IFv8B/1
You need to put a $ in front of {AWS_BUCKET_DEV}, {AWS_BUCKET_STAGING}, and {AWS_BUCKET_PRODUCTION} respectively. Otherwise the shell doesn’t replace the variable names with their values.
... s3://${AWS_BUCKET_DEV} ...
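The first two errors in your log also point at a second problem: [ is itself a command, so it needs spaces around it and around ==. Without them, bash parses ["${develop}"=="master"] as a single word, which is exactly the [==master]: command not found you see. Putting both fixes together, the branch logic (with the same variables as your config) would look like:

if [ "${develop}" == "master" ]
then
  aws --region ${AWS_REGION} s3 sync ~/repo/build s3://${AWS_BUCKET_PRODUCTION} --delete
elif [ "${develop}" == "staging" ]
then
  aws --region ${AWS_REGION} s3 sync ~/repo/build s3://${AWS_BUCKET_STAGING} --delete
else
  aws --region ${AWS_REGION} s3 sync ~/repo/build s3://${AWS_BUCKET_DEV} --delete
fi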
Related
name: Deploy to AWS S3
command: |
  aws --version
  if [ "${CURRENT_BRANCH}" == "main" ]
  then
    aws --region ${AWS_REGION} s3 sync ~/repo/build s3://${AWS_BUCKET_PRODUCTION} --delete
  elif [ "${CURRENT_BRANCH}" == "staging" ]
  then
    aws --region ${AWS_REGION} s3 sync ~/repo/build s3://${AWS_BUCKET_STAGING} --delete
  else
    aws --region ${AWS_REGION} s3 sync ~/repo/build s3://${AWS_BUCKET_DEV} --delete
  fi
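As an aside, CircleCI exposes the branch being built as the built-in CIRCLE_BRANCH environment variable, so (assuming a standard CircleCI job) the comparison can be written without defining your own variable:

if [ "${CIRCLE_BRANCH}" == "main" ]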
I had no idea about these commands; running them on my Windows machine seemed to work fine, but they caused problems on the CI machine.
Stupid mistake: I had misspelled REGION as REIGION in my environment variables.
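A cheap way to catch typos like this early is to assert in a CI step that every variable you depend on is actually set. A minimal sketch in bash, using the variable names from the config above:

# Fail fast if any required variable is unset or misspelled
for var in AWS_REGION AWS_BUCKET_PRODUCTION AWS_BUCKET_STAGING AWS_BUCKET_DEV; do
  if [ -z "${!var}" ]; then
    echo "Missing environment variable: $var" >&2
    exit 1
  fi
done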
I have a Jenkins job which uploads a pretty small bash file (less than 1 MB) to an S3 bucket. It works most of the time, but fails once in a while with the following error:
upload failed: build/xxxxxxx/test.sh The read operation timed out
The above error comes directly from the AWS CLI operation. I am thinking it could be either a network issue, or maybe the disk read operation wasn't available at the time. How do I set an option to retry when this happens? Also, is there a timeout I can increase? I searched the CLI documentation, googled, and checked out aws s3api, but I don't see any such option.
If such an option does not exist, how do folks get around this? Wrap the command to check the error code and reattempt?
I ended up writing a wrapper around the s3 command to retry, and also to get a debug trace on the last attempt. It might help folks.
# Purpose: Allow retry while uploading files to an S3 bucket
# Params:
#   $1 : local file to copy to S3
#   $2 : s3 bucket path
#   $3 : AWS bucket region
#
function upload_to_s3 {
  n=0
  until [ $n -gt 2 ]
  do
    if [ $n -eq 2 ]; then
      # Final attempt: run with --debug so the failure leaves a full trace
      aws s3 cp --debug "$1" "$2" --region "$3"
      return $?
    else
      # Stop retrying as soon as the upload succeeds
      aws s3 cp "$1" "$2" --region "$3" && break
    fi
    n=$((n + 1))
    sleep 30
  done
}
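Usage would be something like upload_to_s3 build/test.sh s3://my-bucket/build/ us-east-1 (bucket and paths are placeholders). Depending on your CLI version, there may also be built-in knobs so you can skip the wrapper entirely: AWS CLI v2 reads a retry mode and attempt count from its config, and the CLI has a global --cli-read-timeout option. A sketch, assuming AWS CLI v2:

# ~/.aws/config — let the CLI itself retry
[default]
retry_mode = standard
max_attempts = 5

# Or per invocation, with a longer read timeout (in seconds)
AWS_RETRY_MODE=standard AWS_MAX_ATTEMPTS=5 \
aws s3 cp build/test.sh s3://my-bucket/build/ --cli-read-timeout 120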
I have a bucket in the EU (London) region in S3. I am trying to upload a tar file through the command line. At the end, it throws the following error:
A client error (PermanentRedirect) occurred when calling the PutObject operation: The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.
I have configured everything correctly using aws configure, giving the correct access key and region. Can someone shed some light on this issue?
I have created a script to upload the database by creating a tar file:
HOST=abc.com
DBNAME=db
BUCKET=s3.eu-west-2.amazonaws.com/<bucketname>/
USER=<user>
stamp=`date +"%Y-%m-%d"`
filename="Mgdb_$stamp.tar.gz"
TIME=`/bin/date +%Y-%m-%d-%T`
DEST=/home/$USER/tmp
TAR=$DEST/../$TIME.tar.gz
/bin/mkdir -p $DEST
echo "Backing up $HOST/$DBNAME to s3://$BUCKET/ on $TIME";
/usr/bin/mongodump --host $HOST --port 1234 -u "user" -p "pass" --authenticationDatabase "admin" -o $DEST
/bin/tar czvf $TAR -C $DEST .
/usr/bin/aws s3 cp $TAR s3://$BUCKET/$stamp/$filename
/bin/rm -f $TAR
/bin/rm -rf $DEST
Just append the region to the AWS Command-Line Interface (CLI) command:
aws s3 cp file.txt s3://my-bucket/file.txt --region eu-west-2
The format for the S3Uri in your script is incorrect. It should be s3://<bucketname>/<prefix>/<filename>. Then you add the --region option to specify the bucket region.
BUCKET=<bucketname>
/usr/bin/aws s3 cp $TAR s3://$BUCKET/$stamp/$filename --region eu-west-2
I'm struggling to write the proper command to copy files from an EC2 instance's current directory to S3. For my tests, I've been running these commands:
echo 'first' >> first.csv
echo 'second' >> second.csv
echo 'third' >> third.csv
ls
aws s3 cp . s3://bucketname/sub
The script is run via AWS Data Pipeline, so I can see ls being executed. However, when it hits the last line, there's some error output. Specifically:
upload failed: ./ to s3://bucketname/sub/ [Errno 21] Is a directory: u'/mnt/taskRunner/'
What would be the correct path name or directory to provide in the first argument after cp?
Try adding the --recursive flag to your aws s3 cp command:
aws s3 cp . s3://bucketname/sub --recursive
From the manual/info page for the s3 cp command:
--recursive (boolean) Command is performed on all files or objects
under the specified directory or prefix.
In general for the AWS CLI you can use aws <command> help and aws <command> <subcommand> help for command manual/info pages. (In this case: aws s3 cp help)
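As a side note, if the goal is to keep a directory mirrored rather than to copy it once, aws s3 sync is recursive by default and only uploads files that are new or changed:

aws s3 sync . s3://bucketname/sub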
This is probably easy, but it's really stumping me. I literally have about 9 hours of experience with Amazon AWS and the CLI.
I have a directory
BDp-Archive/item/
in my S3 bucket, and I want to copy the text files in that directory into its subdirectory called
BDp-Archive/item/txt/
My attempted command was:
aws s3 mv s3://Bdp-Archive/00009e98-3e0f-402e-9d12-7aec8e32b783/ s3://BDp-Archive/00009e98-3e0f-402e-9d12-7aec8e32b783/txt/ --include "*.txt"
This is throwing the error:
A client error (NoSuchKey) occurred when calling the HeadObject operation: Key "00009e98-3e0f-402e-9d12-7aec8e32b783" does not exist
Completed 1 part(s) with ... file(s) remaining
I think the problem is that you need to use the --recursive switch, since by default the mv command only applies to a single object (much like cp and rm). Try:
aws s3 mv s3://Bdp-Archive/00009e98-3e0f-402e-9d12-7aec8e32b783/ s3://BDp-Archive/00009e98-3e0f-402e-9d12-7aec8e32b783/txt/ --include "*.txt" --recursive
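One caveat with the filter flags: everything is included by default, so --include "*.txt" on its own doesn't narrow anything down; --include only re-includes files that an earlier --exclude filtered out. To move only the text files, exclude everything first (same source and destination as above):

aws s3 mv s3://Bdp-Archive/00009e98-3e0f-402e-9d12-7aec8e32b783/ s3://BDp-Archive/00009e98-3e0f-402e-9d12-7aec8e32b783/txt/ --exclude "*" --include "*.txt" --recursive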
I needed to configure the region of my bucket (or specify it as part of the CLI command):
aws s3 cp --region <region> <from> <to>
You need to configure your access key and secret key; try:
aws configure
For more options, see: http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html#cli-installing-credentials
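For reference, aws configure prompts for the four basic settings interactively (the values below are placeholders):

$ aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: eu-west-2
Default output format [None]: json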