I have a Jenkins job which uploads a pretty small bash file (less than 1 MB) to an S3 bucket. It works most of the time but fails once in a while with the following error:
upload failed: build/xxxxxxx/test.sh The read operation timed out
The above error comes directly from the AWS CLI operation. I am thinking it could either be a network issue or the disk read operation not being available at the time. How do I set an option to retry when this happens? Also, is there a timeout I can increase? I searched the CLI documentation, googled, and checked out 'aws s3api', but don't see any such option.
If such an option does not exist, how do folks get around this? Wrap the command to check the error code and reattempt?
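For what it's worth, newer AWS CLI releases do expose global timeout options (--cli-read-timeout and --cli-connect-timeout, in seconds) and retry settings (the max_attempts config value / AWS_MAX_ATTEMPTS environment variable), so depending on the CLI version on the Jenkins agent something like the following version-dependent sketch may already cover it; the file and bucket paths here are placeholders, not the original job's:

# raise the read timeout for this one call (placeholder paths)
aws s3 cp build/test.sh s3://my-bucket/test.sh --cli-read-timeout 120
# or bump the retry count for the current profile
aws configure set max_attempts 5

If the installed CLI predates those options, a wrapper like the one below is the usual fallback.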
I ended up writing a wrapper around the s3 command that retries and also captures the debug output on the last attempt. It might help other folks.
# Purpose: Allow retry while uploading files to an S3 bucket
# Params:
#   $1 : local file to copy to S3
#   $2 : S3 bucket path
#   $3 : AWS bucket region
#
function upload_to_s3 {
    n=0
    until [ "$n" -gt 2 ]
    do
        if [ "$n" -eq 2 ]; then
            # Last attempt: run with --debug so the failure leaves a debug trace
            aws s3 cp --debug "$1" "$2" --region "$3"
            return $?
        else
            aws s3 cp "$1" "$2" --region "$3" && break
        fi
        n=$((n+1))
        sleep 30
    done
}
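A minimal usage sketch, with a placeholder bucket path and region rather than anything from the original job:

# hypothetical file, bucket, and region - substitute your own
upload_to_s3 build/test.sh s3://my-bucket/scripts/test.sh us-west-2

The 30-second sleep between attempts gives transient network blips time to clear before retrying.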
Related situation
I did a successful build on CircleCI and tried deploying to AWS S3, but a problem occurred.
Goal
Build on CircleCI → AWS S3 → AWS CloudFront
This is my repo.
Error
#!/bin/bash -eo pipefail
if ["${develop}"=="master"]
then
aws --region ${AWS_REGION} s3 sync ~/repo/build s3://{AWS_BUCKET_PRODUCTION} --delete
elif ["${develop}" == "staging"]
then
aws --region ${AWS_REGION} s3 sync ~/repo/build s3://{AWS_BUCKET_STAGING} --delete
else
aws --region ${AWS_REGION} s3 sync ~/repo/build s3://{AWS_BUCKET_DEV} --delete
fi
/bin/bash: [==master]: command not found
/bin/bash: line 3: [: missing `]'
fatal error: Parameter validation failed:
Invalid bucket name "{AWS_BUCKET_DEV}": Bucket name must match the regex "^[a-zA-Z0-9.\-_]{1,255}$"
Variable name
AWS_BUCKET_PRODUCTION
AWS_BUCKET_STAGING
AWS_DEV_BUCKET
I used this site to check my bucket name:
https://regex101.com/r/2IFv8B/1
You need to put a $ in front of {AWS_BUCKET_DEV}, {AWS_BUCKET_STAGING}, and {AWS_BUCKET_PRODUCTION} respectively. Otherwise the shell doesn't replace the variable names with their values, which is why the CLI sees the literal bucket name "{AWS_BUCKET_DEV}". The "[==master]: command not found" and "missing `]'" errors are a separate issue: [ is a command in bash, so it needs spaces around it and around ==, for example: if [ "${develop}" == "master" ]
... s3://${AWS_BUCKET_DEV} ...
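Putting the two fixes together, a corrected version of the deploy step might look like this (assuming the environment variables really are named AWS_BUCKET_PRODUCTION, AWS_BUCKET_STAGING, and AWS_BUCKET_DEV; the variable list above shows AWS_DEV_BUCKET, so double-check that the name in the script matches what is actually configured in CircleCI):

#!/bin/bash -eo pipefail
if [ "${develop}" == "master" ]
then
  aws --region ${AWS_REGION} s3 sync ~/repo/build s3://${AWS_BUCKET_PRODUCTION} --delete
elif [ "${develop}" == "staging" ]
then
  aws --region ${AWS_REGION} s3 sync ~/repo/build s3://${AWS_BUCKET_STAGING} --delete
else
  aws --region ${AWS_REGION} s3 sync ~/repo/build s3://${AWS_BUCKET_DEV} --delete
fi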
When I copy a file to an S3 bucket with variables using the AWS CLI, I get an error.
-- command with variables
aws s3 cp %SRC_FILENAME% s3://%S3_BUCKET%/%DEST_FILENAME%
-- error message
FINDSTR: 1 行目は長すぎます。
(translated: FINDSTR: Line 1 is too long.)
If I set the source and destination S3 bucket file names without any variables, it completes successfully as usual. And the 'aws s3api put-object' command built with the same logic (variables) never hits the same issue.
-- command without variables
aws s3 cp G:\XXX\XXX\XXX\XXX.bak s3://<S3_bucketname>/<TAG>/<FILENAME>
-- s3api command with variables
set S3API_COMMAND_STR=aws s3api put-object --bucket %S3_BUCKET% --key %DEST_FILENAME% --body %SRC_FILENAME% --metadata md5chksum=%SRC_HASH% --content-md5 %SRC_HASH%
I think the aws s3api command would be better, but sometimes I need to send files larger than 5 GB, so for now I have to use the aws s3 cp command.
I thought this issue could be caused by a length limitation on Windows variables, but the s3api command with variables is even longer, so that doesn't seem to be it.
If someone has encountered the same issue, please let me know how you handled it. Any advice would be appreciated.
Sincerely.
< Additional Information >
Just a tentative workaround: if I write the command line out to a file and then execute that file, it works successfully.
echo %S3_COMMAND_STR% > temp_cmd.bat
call temp_cmd.bat
But I'm still not sure why this FINDSTR error occurred in the AWS CLI, so any information would be appreciated.
Just FYI.
Still not sure why the AWS CLI would accept it, but either of the following workarounds also succeeds.
aws s3 cp %SRC_FILENAME% s3://%S3_BUCKET%/%DEST_FILENAME% & if ErrorLevel 1 goto ERR_S3_UPLOAD
or
set S3_COMMAND_STR=aws s3 cp %SRC_FILENAME% s3://%S3_BUCKET%/%DEST_FILENAME%
%S3_COMMAND_STR% & if ErrorLevel 1 goto ERR_S3_UPLOAD
When I run AWS S3 SYNC "local drive" "S3bucket", I see a bunch of logs generated on my AWS CLI console. Is there a way to direct these logs to an output/log file for future reference?
I am trying to schedule a SQL job which executes a PowerShell script that syncs backups from a local drive to an S3 bucket. Backups are getting synced to the bucket successfully. However, I am trying to figure out a way to direct the sync progress to an output file. Help appreciated. Thanks!
Simply redirect the output of the command into a file using the ">" operator.
The file does not have to exist beforehand (and in fact will be overwritten if it does exist).
aws s3 sync . s3://mybucket > log.txt
If you wish to append to the given file then use the following operator: ">>".
aws s3 sync . s3://mybucket >> existingLogFile.txt
To test this command, you can use the --dryrun argument to the sync command:
aws s3 sync . s3://mybucket --dryrun > log.txt
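Note that ">" only captures standard output; errors go to standard error and would still land on the console. For an unattended scheduled job it is usually worth capturing both, using plain shell redirection rather than anything AWS-specific:

aws s3 sync . s3://mybucket > log.txt 2>&1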
Windows Server 2012 R2 with Python 2.7.10 and the AWS CLI tool installed. The following works:
aws s3 cp c:\a\a.txt s3://path/
I can upload that file without a problem. What I want to do is upload a file from a mapped drive to an S3 bucket, so I tried this:
aws s3 cp s:\path\file s3://path/
and it works.
Now what I want to do, and cannot figure out, is how to not specify a single file but let it grab all the files, so I can schedule this to upload the contents of a directory to my S3 bucket. I tried this:
aws s3 cp "s:\path\..\..\" s3://path/ --recursive --include "201512"
and I get this error: "TOO FEW ARGUMENTS"
Nearest I can guess, it's mad that I'm not giving it a specific file to send up, but I don't want to do that; I want to automate all the things.
If someone could please shed some light on what I'm missing I would really appreciate it.
Thank you
In case this is useful for anyone else coming after me: add some extra spaces between the source and target. I'd been beating my head against running this command with every combination of single quotes, double quotes, slashes, etc.:
aws s3 cp /home/<username>/folder/ s3://<bucketID>/<username>/archive/ --recursive --exclude "*" --include "*.csv"
And it would give me "aws: error: too few arguments". Every. Single. Way. I. Tried.
So I finally noticed the --debug option in aws s3 cp help
and ran it again this way:
aws s3 cp /home/<username>/folder/ s3://<bucketID>/<username>/archive/ --recursive --exclude "*" --include "*.csv" --debug
And this was the relevant debug line:
MainThread - awscli.clidriver - DEBUG - Arguments entered to CLI: ['s3', 'cp', 'home/<username>/folder\xc2\xa0s3://<bucketID>/<username>/archive/', '--recursive', '--exclude', '*', '--include', '*.csv', '--debug']
I have no idea where the \xc2\xa0 between the source and target came from, but there it is! I updated the line to add a couple of extra spaces and now it runs without errors:
aws s3 cp /home/<username>/folder/ s3://<bucketID>/<username>/archive/ --recursive --exclude "*" --include "*.csv"
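An aside that is not from the original answer: \xc2\xa0 is the UTF-8 encoding of a non-breaking space (U+00A0), which often sneaks in when a command is copied from a web page or chat. If the command lives in a script, a quick way to flag the offending line (assuming bash for the $'...' quoting and GNU grep; myscript.sh is a stand-in for whatever file holds the command) is:

grep -n $'\xc2\xa0' myscript.sh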
aws s3 cp "s:\path\..\..\" s3://path/ --recursive --include "201512"
TOO FEW ARGUMENTS
This is because, in your command, the closing double quote (") is escaped by the preceding backslash (\), so the local path (s:\path\..\..\) is not parsed correctly.
What you need to do is escape each backslash with a double backslash, i.e.:
aws s3 cp "s:\\path\\..\\..\\" s3://path/ --recursive --include "201512"
Alternatively, you can try 'mc', which ships as a single binary and is available for Windows in both 64-bit and 32-bit builds. 'mc' implements mirror, cp, resumable sessions, JSON-parseable output and more - https://github.com/minio/mc
64-bit from https://dl.minio.io/client/mc/release/windows-amd64/mc.exe
32-bit from https://dl.minio.io/client/mc/release/windows-386/mc.exe
Use aws s3 sync instead of aws s3 cp to copy the contents of a directory.
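For the original goal of uploading a whole directory on a schedule, a sketch along those lines (reusing the drive and bucket placeholders from the question, and assuming the target files have 201512 somewhere in their names) would be:

aws s3 sync "s:\path" s3://path/ --exclude "*" --include "*201512*"

The --exclude "*" matters because the CLI includes everything by default, so an --include on its own does not narrow anything down.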
I faced the same situation. Let me share two scenarios I tried with the same code.
Within bash
Please make sure you have an AWS profile in place (run: aws configure). Also, make sure you use a proper proxy if applicable.
$aws s3 cp s3://bucket/directory/ /usr/home/folder/ --recursive --region us-east-1 --profile yaaagy
it worked.
Within a Perl script
$cmd = "aws s3 cp s3://bucket/directory/ /usr/home/folder/ --recursive --region us-east-1 --profile yaaagy";
I enclosed it in double quotes and it was successful. Let me know if this works out for you.
I ran into this same problem recently, and quiver's answer -- replacing single backslashes with double backslashes -- resolved the problem I was having.
Here's the PowerShell code I used to address the problem, using the OP's original example:
# Notice how my path string contains a mixture of single- and double-backslashes
$folderPath = "c:\\a\a.txt"
echo "`$folderPath = $($folderPath)"
# Use the "Resolve-Path" cmdlet to generate a consistent path string.
$osFolderPath = (Resolve-Path $folderPath).Path
echo "`$osFolderPath = $($osFolderPath)"
# Escape backslashes in the path string.
$s3TargetPath = ($osFolderPath -replace '\\', "\\")
echo "`$s3TargetPath = $($s3TargetPath)"
# Now pass the escaped string to your AWS CLI command.
echo "AWS Command = aws s3 cp `"s3://path/`" `"$s3TargetPath`""
This is probably easy, but it's really stumping me. I literally have about 9 hours of experience with Amazon AWS and the CLI.
I have a directory
BDp-Archive/item/
on my S3 and I want to copy the text files in that directory into its subdirectory called
BDp-Archive/item/txt/
My attempted command was:
aws s3 mv s3://Bdp-Archive/00009e98-3e0f-402e-9d12-7aec8e32b783/ s3://BDp-Archive/00009e98-3e0f-402e-9d12-7aec8e32b783/txt/ --include "*.txt"
This is throwing the error:
A client error (NoSuchKey) occurred when calling the HeadObject operation: Key "00009e98-3e0f-402e-9d12-7aec8e32b783" does not exist
Completed 1 part(s) with ... file(s) remaining
I think the problem is that you need to use the --recursive switch, since by default the mv command only applies to a single object (much like the other commands - rm, sync, etc.). Try:
aws s3 mv s3://Bdp-Archive/00009e98-3e0f-402e-9d12-7aec8e32b783/ s3://BDp-Archive/00009e98-3e0f-402e-9d12-7aec8e32b783/txt/ --include "*.txt" --recursive
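One caveat that is standard AWS CLI filter behaviour rather than part of the answer above: because everything is included by default, --include "*.txt" on its own does not restrict the move; to move only the text files you also need to exclude everything else first, along the lines of:

aws s3 mv s3://Bdp-Archive/00009e98-3e0f-402e-9d12-7aec8e32b783/ s3://BDp-Archive/00009e98-3e0f-402e-9d12-7aec8e32b783/txt/ --recursive --exclude "*" --include "*.txt"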
I needed to configure the region of my bucket (or specify it as part of the CLI command):
aws s3 cp --region <region> <from> <to>
You need to configure your access key and secret key; try:
aws configure
For more options, see: http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html#cli-installing-credentials
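Once the keys are in place, a quick way to confirm the CLI is actually picking them up (standard AWS CLI commands, offered here as a suggestion rather than part of the original answer):

aws configure list
aws sts get-caller-identity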