AWS S3 upload fails: RequestTimeTooSkewed - amazon-web-services

I'm using
aws s3 sync ~/folder/ s3:// --delete
to upload (and sync) a large number of files to an S3 bucket. Some - but not all - of the files fail, throwing this error message:
upload failed: to s3://bucketname/folder/
A client error (RequestTimeTooSkewed) occurred when calling the UploadPart operation: The difference between the request time and the current time is too large
I know that the cause of this error is usually a local time that's out of sync with Internet time, but I'm running NTP (on my Ubuntu PC) and the date/time seem absolutely accurate - and this error has only been reported for about 15 out of the forty or so files I've uploaded so far.
Some of the files are relatively large - up to about 70MB each - and my upload speeds aren't fantastic: could S3 possibly be comparing the initial and completion times and reporting their difference as an error?
Thanks,

The time verification happens at the start of your upload to S3, so it isn't related to how long the files take to upload.
Try comparing your system time with what S3 reports and make sure there isn't any significant drift, just to be sure:
# Time from Amazon
$ curl http://s3.amazonaws.com -v
# Time on your local machine
$ date -u
(Time is returned in UTC)
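If you want to compare the two side by side, something like this works as a rough sketch - it just pulls the Date header out of S3's HTTP response:
# Date header reported by S3, printed next to local UTC time
$ s3_time=$(curl -sI http://s3.amazonaws.com | grep -i '^date:' | cut -d' ' -f2-)
$ echo "S3:    $s3_time"; echo "Local: $(date -u)"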

I was running aws s3 cp inside a Docker container on a MacBook Pro and got this error. Restarting Docker for Mac fixed it.
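A quick way to check whether the container clock has drifted from the host is to print both (a rough sketch, assuming you can pull the alpine image):
# clock inside a fresh container vs. the host clock
$ docker run --rm alpine date -u
$ date -u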

Amazon keeps S3's system clocks in sync via NTP, so your machine needs to do the same.
Run
sudo apt-get install ntp
then open /etc/ntp.conf and add the following at the bottom:
server 0.amazon.pool.ntp.org iburst
server 1.amazon.pool.ntp.org iburst
server 2.amazon.pool.ntp.org iburst
server 3.amazon.pool.ntp.org iburst
Then run
sudo service ntp restart
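Once ntp is restarted, it's worth confirming it is actually syncing - a quick check, assuming the stock ntp package (the peer marked with * is the selected time source); on systemd-based Ubuntu, timedatectl gives a summary too:
ntpq -p
timedatectl status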

It now seems that the multipart uploads were what was failing on aws s3. Using s3cmd instead works perfectly.
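For reference, a rough s3cmd equivalent of the original sync command might look like this (the bucket and folder names are taken from the error message above, so adjust to yours; --delete-removed plays the role of --delete):
s3cmd sync --delete-removed ~/folder/ s3://bucketname/folder/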

You have to sync the local time on your machine; it has drifted out of sync with world (UTC) time.

I was having this issue on macOS.
I fixed it via
System Preferences -> Date & Time -> check the box "Set date and time automatically"
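If you prefer the terminal, the equivalent should be something along these lines (a sketch; systemsetup needs sudo):
# point macOS at Apple's NTP server and turn automatic network time on
sudo systemsetup -setnetworktimeserver time.apple.com
sudo systemsetup -setusingnetworktime on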

Restarting the machine fixed this issue for me.

In cmd, run aws configure.
Set a default region name, e.g. us-east-1 - whichever region it is, it should be set to something rather than left as none:
Default region name [none] -->> wrong
Default region name [us-east-1] -->> right
Then create a bucket via the GUI on the AWS website and check its creation date and time.
Note down that date and time from AWS, and set the date and time of your PC to match it in your PC's settings.
Now try the command again in cmd:
-> aws s3 ls
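For reference, a minimal sketch of what that configure session looks like (the key and secret are placeholders; the region can be any valid one, just not left empty):
$ aws configure
AWS Access Key ID [None]: <your access key id>
AWS Secret Access Key [None]: <your secret access key>
Default region name [None]: us-east-1
Default output format [None]: json
$ aws s3 ls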

Related

Google Cloud Ops Agent

I am having an issue where Google Cloud Ops Agent's logging gathers a lot of data and fills up my entire Debian server's hard drive in about 3 weeks, due to the ever-increasing size of the log file.
I do not want to increase the size of my server's hard drive.
Does anyone know how to configure Google Cloud Ops Agent so that it only retains log data for the previous 7 days?
EDIT: the Google Cloud Ops Agent log file is stored in the directory below:
/var/log/google-cloud-ops-agent/subagents/logging-module.log
I faced the same issue recently while using agent 2.11.0. And it's not just an enormous log file - it's also ridiculous CPU usage! Check it out in htop.
If you open the log file you'll see it spamming errors about buffer chunks. Apparently they got corrupted somehow, so the agent can't read them and send them on - hence the high IO and CPU usage.
The solution is to stop the service:
sudo service google-cloud-ops-agent stop
Then clear all buffer chunks:
sudo rm -rf /var/lib/google-cloud-ops-agent/fluent-bit/buffers/
And delete log file if you want:
sudo rm -f /var/log/google-cloud-ops-agent/subagents/logging-module.log
Then start the agent:
sudo service google-cloud-ops-agent start
This helped me out.
Btw, this issue is described here, and it seems that Google "fixed" it as of 2.7.0-1. Whatever they mean by that, since we still ran into it...
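If, beyond that, you still want the plain 7-day retention asked about in the question, one possible sketch is a logrotate rule for that log file (the config file name below is made up, and copytruncate is used because the agent keeps the file open):
sudo tee /etc/logrotate.d/ops-agent-logging-module <<'EOF'
/var/log/google-cloud-ops-agent/subagents/logging-module.log {
    daily
    rotate 7
    compress
    missingok
    copytruncate
}
EOF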

Dynamodb local web shell does not load

I am running DynamoDB locally using the instructions here. To rule out potential Docker networking issues I am using the "Download Locally" version of the instructions. Before running DynamoDB locally I run aws configure to set some fake values for the AWS access key, secret, and region, and here is the output:
$ aws configure
AWS Access Key ID [****************fake]:
AWS Secret Access Key [****************ake2]:
Default region name [local]:
Default output format [json]:
Here is the output of running DynamoDB locally:
$ java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar -sharedDb
Initializing DynamoDB Local with the following configuration:
Port: 8000
InMemory: false
DbPath: null
SharedDb: true
shouldDelayTransientStatuses: false
CorsParams: *
I can confirm that DynamoDB is running locally by successfully listing tables with the AWS CLI:
$ aws dynamodb list-tables --endpoint-url http://localhost:8000
{
"TableNames": []
}
but when I visit http://localhost:8000/shell in my browser, I get an error and the page does not load.
I tried running curl against the shell endpoint to see if I could get a more useful error message:
$ curl http://localhost:8000/shell
{
"__type":"com.amazonaws.dynamodb.v20120810#MissingAuthenticationToken",
"Message":"Request must contain either a valid (registered) AWS access key ID or X.509 certificate."}%
I tried looking up the error above, but there isn't much setup I can do when I'm only opening the shell in the browser. Any help is appreciated on how I can run the DynamoDB JavaScript web shell with this setup.
Software versions:
aws cli: aws-cli/2.4.7 Python/3.9.9 Darwin/20.6.0 source/x86_64 prompt/off
OS: MacOS Big Sur 11.6.2 (20G314)
DynamoDB Local Web Shell was deprecated with version 1.16.X and is not available any longer from 1.17.X to latest. There are no immediate plans for a new Web Shell to be introduced.
You can download an old version of DynamoDB Local < 1.17.X should you wish to use the Web Shell.
Available versions:
aws s3 ls s3://dynamodb-local-frankfurt/
Download the most recent working version with the Web Shell:
aws s3 cp s3://dynamodb-local-frankfurt/dynamodb_local_2021-04-27.tar.gz .
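Then unpack it and start it the same way as the current release (assuming the archive lays out DynamoDBLocal.jar and DynamoDBLocal_lib in the extracted directory, as the current builds do):
tar -xzf dynamodb_local_2021-04-27.tar.gz
java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar -sharedDb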
The next release of DynamoDB Local will have an updated README indicating its deprecation
As I answered in "DynamoDB local http://localhost:8000/shell", this appears to be a regression in new versions of DynamoDB Local, where the shell mysteriously stopped working, whereas versions from a year ago do work.
Somebody should report it to Amazon. If there is some flag that new versions require you to set to enable the shell, it isn't documented anywhere that I can find.
Update Java to the latest version and voilà, it works!

Amazon S3 cp keeps stopping partway through a copy process

I have an overnight scheduled task (batch script) on a WS2K3 server which copies a ~2GB zip file to an S3 bucket for archival purposes.
I'm using AWS CLI v1 which I appreciate isn't ideal (nor is the fact the OS is out of date and unsupported). I had enough problems coercing AWS CLI into actually running on the server! (it's a legacy system the client refuses to update).
The copy process appears to have problems. I suspect there are intermittent connectivity issues at the client's site and they're causing the copy process to exit. I've added a debug argument to the command, but for some reason the debug info isn't being echoed to the logfile, so I can't identify the specific reason why it keeps exiting. All I can do is retry until the copy completes.
The S3 upload section of the script is below
:s3cp
aws configure set default.s3.max_concurrent_requests 1
#echo S3 Backup Started: %date% %time%
aws s3 cp %backup%.zip %bucket% --debug
IF %ERRORLEVEL% NEQ 0 goto:s3cp
#echo S3 Backup Finished: %date% %time%
goto exit
I've reduced concurrent connections in an attempt to restrict the resource overhead for this routine (the server struggles with most tasks!) but it's had no effect.
The last time it ran, the upload started at 20:04, restarted 23 times before finally completing a successful upload nearly 6 hours after the first copy process was called.
Are there any additional arguments I can pass to the AWS CLI to cope with timeouts and connection unreliability to ensure the process doesn't exit immediately?
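In case it's useful for anyone with a similar setup: the same s3 configuration family used above for max_concurrent_requests also exposes chunk-size and bandwidth settings, and reasonably recent CLI builds accept read/connect timeout options - a sketch only, with no guarantee every option exists in an old v1 build:
aws configure set default.s3.multipart_chunksize 8MB
aws configure set default.s3.max_bandwidth 512KB/s
aws s3 cp %backup%.zip %bucket% --cli-read-timeout 300 --cli-connect-timeout 60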

Gsutil Terminal Command not actually updating files in the Cloud Console

Linux distro: Pop!_OS 19.10 (Ubuntu-based)
For context, I'm hosting a static website through Google Cloud Bucket.
So I tried executing the command
gsutil -m cp -r ejsout gs://evaapp.xyz
to push to my storage bucket.
The command copies all the files successfully, returning
/ [333/333 files][ 19.7 MiB/ 19.7 MiB] 100% Done 998.4 KiB/s ETA 00:00:00
Operation completed over 333 objects/19.7 MiB.
but when I go and look at the bucket online, the files aren't overwritten.
I've waited for hours, even days, and nothing happens to the files in the Google Cloud Console. Only files that didn't previously exist show up in the bucket console; existing files aren't updated. I have to update the files manually through the Cloud Console for them to change.
Any way to fix this? Feel free to close this issue because I'm bad at this and I couldn't find any helpful documentation on it. It's just annoying - I want to do the push from the terminal. Thanks!

zfs send of 7TB snapshot to S3 upload results in "An error occurred (Unknown) when calling the CompleteMultipartUpload operation"

I have a CentOS 7.4.1708 server that I am attempting to backup via AWS S3. Kernel is 3.10.0-693.17.1.el7. Since the filesystem I'm trying to backup is ZFS and I have it on scheduled snapshots, I believed I could do a ZFS send to S3 and backup my files that way.
This is the command I attempted:
zfs send <snapshot> | aws s3 cp --expected-size=$((1024*1024*1024 * 7000)) - s3://s3bucket/snapshot image name
The --expected-size corresponds to 7.5 TB. My snapshot image is around 7.35TB. I had thought this command would allow me to send my image to S3 but this has unfortunately resulted in an error message after running fine for a few days:
upload failed: - to s3://home-data/nfs-home#zfs-auto-snap_monthly-2018-06-14-1004.img
An error occurred (Unknown) when calling the CompleteMultipartUpload operation (reached max retries: 4): Unknown
Does anyone know what I have to do differently to avoid this error?
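For what it's worth, a quick sketch of the arithmetic in that command, using S3's documented per-object maximum of 5 TiB (what AWS describes as 5 TB):
# what --expected-size works out to
echo $((1024*1024*1024 * 7000))       # 7516192768000 bytes = 7000 GiB ≈ 6.8 TiB
# S3's documented per-object ceiling, for comparison
echo $((5 * 1024*1024*1024*1024))     # 5497558138880 bytes = 5 TiB
The snapshot being streamed is around 7.35 TB, so it's worth comparing those two numbers before looking at anything else.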