I'm having issues uploading files that take a while to transfer, using presigned PUT object URLs together with transfer acceleration. Sometimes it works and sometimes it doesn't.
I've run the following tests, using Java SDK 2.15.9 to generate the URLs and curl to upload. I'm uploading from Sweden to an S3 bucket in the us-east-2 region with transfer acceleration enabled.
file size | URL expire time | transfer speed | time to upload | status
--- | --- | --- | --- | ---
20 MB | 1 min | 100 kB/s | x | failed after 1 min 23 s with 403 Forbidden
42 MB | 10 min | 50 kB/s | 13 min | success
42 MB | 10 min | 10 kB/s | x | failed after 12 min with 403 Forbidden
42 MB | 10 min | 25 kB/s | 28 min | success
What is going on here? My first theory was that the expiry time needed to be longer than the upload time. However, reading through the docs, it seems that the expiry time is validated at the start of the request. Is that also true when transfer acceleration is enabled? And an expiry time of 10 min worked even though one upload took 28 min.
Do I need to set a longer expiry time?
I used the following curl command, with --limit-rate to simulate slow transfers:
curl -v -H "Content-Type: $contentType" --limit-rate $rateLimit --upload-file $file "$url"
Code to generate the URL:

import software.amazon.awssdk.auth.credentials.DefaultCredentialsProvider
import software.amazon.awssdk.services.s3.S3Configuration
import software.amazon.awssdk.services.s3.presigner.S3Presigner
import java.time.Duration
import java.util.UUID

private val presigner by lazy {
    S3Presigner.builder()
        .credentialsProvider(DefaultCredentialsProvider.create())
        .region(s3Region)
        // Sign against the transfer acceleration endpoint
        .serviceConfiguration(S3Configuration.builder()
            .accelerateModeEnabled(true)
            .checksumValidationEnabled(false)
            .build())
        .build()
}

override fun run() {
    val url = presigner.presignPutObject { builder ->
        builder.putObjectRequest {
            it.bucket(s3Bucket)
            it.key(UUID.randomUUID().toString())
            it.contentType(contentType)
        }.signatureDuration(Duration.ofSeconds(expire))
    }.url()
    println(url)
}
I got a response from AWS support: CloudFront, which fronts the transfer acceleration endpoints, does not forward the request to S3 until it has received the entire body, so the signature is effectively validated only once the upload has finished. The resolution is therefore to increase the expiry time of the upload URL so that it covers the whole transfer.
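As a rough sketch of that resolution, one could size the signature duration from the file size and a pessimistic transfer rate. The uploadExpiry helper, the 10 kB/s floor, and the 15-minute margin below are all made-up illustrative values, not anything prescribed by AWS:

import java.time.Duration

// Hypothetical helper: make the signature last long enough to cover the
// whole upload at a pessimistic transfer rate, plus a safety margin.
fun uploadExpiry(fileSizeBytes: Long, worstCaseBytesPerSec: Long = 10_000): Duration {
    val uploadSeconds = fileSizeBytes / worstCaseBytesPerSec
    return Duration.ofSeconds(uploadSeconds).plus(Duration.ofMinutes(15)) // margin
}

// Usage with the presigner above:
// .signatureDuration(uploadExpiry(file.length()))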
Related
I executed the command aws s3 ls and got the following error message:
An error occurred (RequestTimeTooSkewed) when calling the ListBuckets operation: The difference between the request time and the current time is too large.
Please advise.
If you're using WSL, you can run wsl --shutdown in CMD or PowerShell. This ensures the next time you start a WSL session, it cold boots and fixes the time.
https://github.com/microsoft/WSL/issues/4245
AWS API requests are 'signed', and part of the information exchanged is a timestamp. If the timestamp is more than 900 seconds old, the request will be rejected.
This is done to prevent "replay attacks" where old requests are sent again.
You can fix this by correcting the Date and Time on the system where you are sending the request.
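One way to check your skew without changing anything is to compare the local clock against the Date header an AWS endpoint returns. A minimal sketch, assuming Java 11+ and using s3.amazonaws.com purely as a convenient time source:

import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse
import java.time.Duration
import java.time.Instant
import java.time.ZonedDateTime
import java.time.format.DateTimeFormatter

fun main() {
    val client = HttpClient.newHttpClient()
    val request = HttpRequest.newBuilder(URI.create("https://s3.amazonaws.com"))
        .method("HEAD", HttpRequest.BodyPublishers.noBody())
        .build()
    val response = client.send(request, HttpResponse.BodyHandlers.discarding())
    // The Date header is RFC 1123, e.g. "Mon, 14 Mar 2011 10:09:28 GMT"
    val serverTime = response.headers().firstValue("Date")
        .map { ZonedDateTime.parse(it, DateTimeFormatter.RFC_1123_DATE_TIME).toInstant() }
        .orElseThrow()
    val skew = Duration.between(serverTime, Instant.now())
    // Anything approaching 15 minutes will trip RequestTimeTooSkewed
    println("Clock skew vs AWS: $skew")
}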
A bit of an odd case: my API is sent a presigned URL to write to, and a presigned URL to download the file from.
The problem is that if they send a very large file, the presigned URL we need to write to can expire before we get to that step (some processing happens in between the read and the write).
Is it possible to 'open' the connection for writing early to make sure it doesn't expire, and then start writing once the earlier processing is done? Or maybe there is a better way of handling this.
The order goes:

1. Receive an API request with a downloadUrl and an uploadUrl
2. Download the file
3. Process the file
4. Upload the file to the uploadUrl

TL;DR: How can I ensure the URL for step 4 doesn't expire before I get to it?
When generating the pre-signed URL, you have complete control over the time duration. For example, this Java code shows how to set the time when creating a GetObjectPresignRequest object:
GetObjectPresignRequest getObjectPresignRequest = GetObjectPresignRequest.builder()
        .signatureDuration(Duration.ofMinutes(10))
        .getObjectRequest(getObjectRequest)
        .build();
So you can increase the time limit in such situations.
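For the upload side of the question, the PUT variant is analogous. A sketch with the SDK v2 presigner, where the bucket, key, and one-hour duration are illustrative placeholders sized to cover download, processing, and upload:

import software.amazon.awssdk.services.s3.model.PutObjectRequest
import software.amazon.awssdk.services.s3.presigner.S3Presigner
import software.amazon.awssdk.services.s3.presigner.model.PutObjectPresignRequest
import java.time.Duration

val presigner: S3Presigner = S3Presigner.create()

val putObjectRequest = PutObjectRequest.builder()
    .bucket("my-bucket")       // hypothetical bucket and key, for illustration only
    .key("processed-file")
    .build()

val putPresignRequest = PutObjectPresignRequest.builder()
    .signatureDuration(Duration.ofHours(1))  // sized to cover download + processing + upload
    .putObjectRequest(putObjectRequest)
    .build()

val uploadUrl = presigner.presignPutObject(putPresignRequest).url()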
From the SDK documentation (link: https://docs.aws.amazon.com/aws-sdk-php/v3/api/class-Aws.S3.S3Client.html#_createPresignedRequest), the $expires parameter should denote the time at which the URL should expire.
So if I specified 2 minutes as the expiration time, the URL should be invalid after 2 minutes. My code looks like this:
<?php
$s3 = $this->cloudProvider->getClient(); // S3 client
$cmd = $s3->getCommand('GetObject', [
    'Bucket' => $this->getSdkBucket(), // Bucket name
    'Key'    => "$s3Name",
]);
$urlReq = $s3->createPresignedRequest($cmd, $expirationTime); // $expirationTime is a Unix timestamp
And I get a URL with the correct expiry time (in my case the client wanted it to match the session expiry, and the session time is 4 hours):
X-Amz-Content-Sha256=UNSIGNED-PAYLOAD
&X-Amz-Security-Token=long_string_goes_here
&X-Amz-Algorithm=AWS4-HMAC-SHA256
&X-Amz-Credential=another_string_goes_here
&X-Amz-Date=20200907T110127Z
&X-Amz-SignedHeaders=host
&X-Amz-Expires=14400 // This decreases depending on how long the user is logged in - max 4hrs
&X-Amz-Signature=another_string_here
The problem is that this URL is still valid after the 4 hours have passed.
From what I've read in this answer about expiry time (https://stackoverflow.com/a/57792699/629127), the URL will be valid from 6 hours up to 7 days, depending on the credentials used to generate it.
And I'm using the IAM credentials (ECS provider).
So does this mean that, no matter what value I put in the expiry variable, I won't be able to limit the link's validity to that time period?
The expiration time is checked by S3, but if the browser or a proxy (CloudFront, for example) has the file cached, the request never reaches S3 and the check never happens. If it is cached only in the browser, don't worry: that just means the same user who already downloaded it can see it after expiration.
You can use the browser devtools to check for this: on the Network tab, find the request to the signed URL and see whether it was served from the cache (memory/disk cache).
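If you want to verify what the URL itself promises, you can decode its query string: X-Amz-Date plus X-Amz-Expires gives the nominal cutoff that S3 enforces (credential lifetime aside). A diagnostic sketch with deliberately naive parsing:

import java.time.Duration
import java.time.Instant
import java.time.LocalDateTime
import java.time.ZoneOffset
import java.time.format.DateTimeFormatter

fun presignedUrlExpiry(url: String): Instant {
    // Naive query-string parsing, fine for a one-off diagnostic
    val params = url.substringAfter('?').split('&')
        .associate { it.substringBefore('=') to it.substringAfter('=') }
    val signedAt = LocalDateTime
        .parse(params.getValue("X-Amz-Date"), DateTimeFormatter.ofPattern("yyyyMMdd'T'HHmmss'Z'"))
        .toInstant(ZoneOffset.UTC)
    return signedAt.plus(Duration.ofSeconds(params.getValue("X-Amz-Expires").toLong()))
}

// For the URL above: 20200907T110127Z + 14400 s -> 2020-09-07T15:01:27Z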
I am attempting to generate a presigned URL that only allows one visit/use of the URL.
I have been trying to approximate this with a short expiry time, but from what I have tested, anything under 70 seconds always gives an expired-URL error:

aws s3 presign s3://bucket/object --expires-in 70

The alternative would be a very short expiry time (e.g. 5 seconds), but I cannot get anything under 70 seconds to work without an expired-URL error.
If anything under 70 seconds gives you an error, your clock is almost certainly wrong on the machine where you are generating the signed URL.
Expiration is calculated as --expires-in seconds in the future relative to the clock on the machine where you are running aws-cli. There's an assumption that this is a trusted environment (your credentials are there) and the clock is also trusted to have been set accurately.
(The clock on the machine where the browser is being used to access the URL doesn't matter.)
Note that the fixed expiration time associated with a given URL is shown in the error message.
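To make the arithmetic concrete, here is a minimal sketch of how that fixed expiration is derived; the 70-second value mirrors the command above, and everything is computed from the generating machine's clock:

import java.time.Duration
import java.time.Instant

fun main() {
    val signedAt = Instant.now()            // stamped from the generating machine's clock
    val expiresIn = Duration.ofSeconds(70)  // the --expires-in value
    val expiresAt = signedAt.plus(expiresIn)
    // If this machine's clock runs slow by more than expiresIn, S3 will
    // consider the URL expired the moment it is first used.
    println("URL valid until $expiresAt (by this machine's clock)")
}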
I am getting the following error from S3:

<RequestTime>Mon, 14 Mar 2011 10:09:28 GMT</RequestTime>
<ServerTime>2011-03-14T09:09:29Z</ServerTime></Error>

Reason: Amazon S3 allows only a small timestamp variation, up to 15 minutes, between its servers and the requesting client (the user's PC). As Amazon is a big service with a large number of users, security matters a lot.
Solution attempted: I installed ntp on my Ubuntu machine and tried to sync the clock, but it still throws the same error.
How can I solve it? My project is in Django.
Make sure you use UTC time for your requests. From the AWS docs:

Request Elements
Time stamp—Each request must contain the date and time the request was created, represented as a string in UTC.
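The SDKs handle this formatting for you, but for reference, a sketch of producing that compact ISO-8601 UTC form (the same shape as the X-Amz-Date=20200907T110127Z value earlier in this thread), independent of the machine's local time zone:

import java.time.ZoneOffset
import java.time.ZonedDateTime
import java.time.format.DateTimeFormatter

fun amzDate(): String =
    // Always format in UTC; a correct clock in the "wrong" zone still signs correctly
    ZonedDateTime.now(ZoneOffset.UTC).format(DateTimeFormatter.ofPattern("yyyyMMdd'T'HHmmss'Z'"))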
I had the same problem. Update your date with the following:
rdate -s ntp.xs4all.nl
Substitute whatever NTP server you require.