With Signature Version 4, a pre-signed URL is valid for at most a week. In Python I worked around this by signing with the older 's3' (Signature Version 2) scheme, which allows much longer expirations:
import boto3
import botocore.client

def presigned_get_url(access_key, secret_key, bucket_name, key):
    s3_client = boto3.client(
        's3',
        aws_access_key_id=access_key,
        aws_secret_access_key=secret_key,
        config=botocore.client.Config(signature_version='s3')
    )
    return s3_client.generate_presigned_url(
        'get_object',
        Params={
            'Bucket': bucket_name,
            'Key': key
        },
        ExpiresIn=400000000)  # near the practical maximum: ~12.7 years
But for Go I found only func (*Request) Presign:
req, _ := s3Client.GetObjectRequest(&s3.GetObjectInput{
    Bucket: aws.String(bucketName),
    Key:    &key,
})
tenYears := time.Now().AddDate(10, 0, 0).Sub(time.Now())
url, err := req.Presign(tenYears)
The HTTP response for such a URL is:
AuthorizationQueryParametersError: X-Amz-Expires must be less than a week (in seconds); that is, the given X-Amz-Expires must be less than 604800 seconds.
Is there really no way to pre-sign a URL valid for years in Go using the AWS SDK?
If you want to pre-sign a URL for longer than a week, then your use case for pre-signed URLs is probably not valid. According to the Signature Version 4 specification, one week (604800 seconds) really is the maximum.
Pre-signed URLs are often used to serve content from S3 to authenticated users only.
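Since the one-week cap cannot be lifted, the usual pattern is to mint a fresh, short-lived URL each time an authenticated user requests the content, rather than handing out one long-lived URL. Here is a minimal Go sketch of that pattern; the handler route, bucket name, and isAuthenticated check are illustrative placeholders, not part of the original answer:

package main

import (
    "net/http"
    "time"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/s3"
)

// isAuthenticated is a hypothetical stand-in for your real auth check.
func isAuthenticated(r *http.Request) bool { return true }

func main() {
    svc := s3.New(session.Must(session.NewSession()))

    http.HandleFunc("/download", func(w http.ResponseWriter, r *http.Request) {
        if !isAuthenticated(r) {
            http.Error(w, "forbidden", http.StatusForbidden)
            return
        }
        req, _ := svc.GetObjectRequest(&s3.GetObjectInput{
            Bucket: aws.String("bucketName"),
            Key:    aws.String(r.URL.Query().Get("key")),
        })
        // A short-lived URL minted per request, well under the 604800-second cap.
        url, err := req.Presign(15 * time.Minute)
        if err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        http.Redirect(w, r, url, http.StatusFound)
    })
    http.ListenAndServe(":8080", nil)
}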
The response to a PUT request with a signed URL doesn't contain the Access-Control-Allow-Origin header.
import os
from datetime import timedelta

import requests
from google.cloud import storage

os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = '<path to google credentials>'
client = storage.Client()
bucket = client.get_bucket('my_bucket')

policies = [
    {
        'origin': ['*'],
        'method': ['PUT'],
    }
]
bucket.cors = policies
bucket.update()

blob = bucket.blob('new_file')
url = blob.generate_signed_url(timedelta(days=30), method='PUT')
response = requests.put(url, data='some data')
for header in response.headers.keys():
    print(header)
Output:
X-GUploader-UploadID
ETag
x-goog-generation
x-goog-metageneration
x-goog-hash
x-goog-stored-content-length
x-goog-stored-content-encoding
Vary
Content-Length
Date
Server
Content-Type
Alt-Svc
As you can see, there are no CORS headers. Can I conclude that GCS doesn't support CORS properly/fully?
Cross-Origin Resource Sharing (CORS) allows interactions between resources from different origins. By default it is disabled in Google Cloud Storage in order to prevent malicious behavior.
You can enable it using the client libraries, the REST API, or the Cloud SDK, keeping in mind the following rules:
1. Authenticate using a user or service account that has FULL_CONTROL permission on Cloud Storage.
2. To get proper CORS headers, use one of the two XML API URLs:
- storage.googleapis.com/[BUCKET_NAME]
- [BUCKET_NAME].storage.googleapis.com
(The origin storage.cloud.google.com/[BUCKET_NAME] will not respond with CORS headers.)
3. The request needs a proper Origin header that matches the bucket's CORS origin configuration, as stated in point 3 of the CORS troubleshooting documentation. In the case of your code:
headers = {
    'Origin': '*'
}
response = requests.put(url, data='some data', headers=headers)
for header in response.headers.keys():
    print(header)
gives the following output:
X-GUploader-UploadID
ETag
x-goog-generation
x-goog-metageneration
x-goog-hash
x-goog-stored-content-length
x-goog-stored-content-encoding
Access-Control-Allow-Origin
Access-Control-Expose-Headers
Content-Length
Date
Server
Content-Type
I had this issue. For me the problem was that I was using POST instead of PUT. Furthermore, I had to set the Content-Type of the upload to match the content type used to generate the signed URL. The default Content-Type in the demo is "application/octet-stream", so I had to change it to whatever the content type of the upload actually was. When doing the XMLHttpRequest, I just had to send the file directly instead of using FormData.
This is how I got the signed URL:
import { Storage } from '@google-cloud/storage';

const storage = new Storage();

const options = {
  version: 'v4',
  action: 'write',
  expires: Date.now() + 15 * 60 * 1000, // 15 minutes
  contentType: 'application/octet-stream',
};

// Get a v4 signed URL for uploading the file
// (upload.id comes from the surrounding app code)
const [url] = await storage
  .bucket('lsa-storage')
  .file(upload.id)
  .getSignedUrl(options as any);
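For completeness, the upload side then has to send the raw file bytes via PUT with a Content-Type matching the one the URL was signed for, which is the same rule the answer above describes for XMLHttpRequest. A minimal sketch in Go, with the signed URL and file name as placeholders:

package main

import (
    "bytes"
    "log"
    "net/http"
    "os"
)

func main() {
    // signedURL is a placeholder for the value returned by getSignedUrl above.
    signedURL := "https://storage.googleapis.com/..."

    data, err := os.ReadFile("local-file.bin") // hypothetical file to upload
    if err != nil {
        log.Fatal(err)
    }

    req, err := http.NewRequest(http.MethodPut, signedURL, bytes.NewReader(data))
    if err != nil {
        log.Fatal(err)
    }
    // Must match the contentType the URL was signed with.
    req.Header.Set("Content-Type", "application/octet-stream")

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()
    log.Println("status:", resp.Status)
}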
I create a pre-signed URL and get back something like
https://s3.amazonaws.com/MyBucket/MyItem/
?X-Amz-Security-Token=TOKEN
&X-Amz-Algorithm=AWS4-HMAC-SHA256
&X-Amz-Date=20171206T014837Z
&X-Amz-SignedHeaders=host
&X-Amz-Expires=3600
&X-Amz-Credential=CREDENTIAL
&X-Amz-Signature=SIGNATURE
I can now curl this without a problem. However, if I add another query parameter, I get back a 403, e.g.:
https://s3.amazonaws.com/MyBucket/MyItem/
?X-Amz-Security-Token=TOKEN
&X-Amz-Algorithm=AWS4-HMAC-SHA256
&X-Amz-Date=20171206T014837Z
&X-Amz-SignedHeaders=host
&X-Amz-Expires=3600
&X-Amz-Credential=CREDENTIAL
&X-Amz-Signature=SIGNATURE
&Foo=123
How come? Is it possible to generate a pre-signed URL that supports custom query parameters?
It seems to be technically feasible to insert custom query parameters into a v4 pre-signed URL before it is signed, but not all of the AWS SDKs expose a way to do this.
Here's an example of a roundabout way to do this with the AWS JavaScript SDK:
const AWS = require('aws-sdk');
const s3 = new AWS.S3({region: 'us-east-1', signatureVersion: 'v4'});
const req = s3.getObject({Bucket: 'mybucket', Key: 'mykey'});
req.on('build', () => { req.httpRequest.path += '?session=ABC123'; });
console.log(req.presign());
I've tried this with custom query parameters that begin with X- and ones that don't; both appeared to work fine. I've also tried multiple query parameters (?a=1&b=2), and that worked too.
The customized pre-signed URLs work correctly (I can use them to get S3 objects), and the query parameters make it into CloudWatch Logs, so they can be used for correlation purposes.
Note that if you want to supply a custom expiration time, then do it as follows:
const Expires = 120;
const url = req.presign(Expires);
I'm not aware of other (non-JavaScript) SDKs that let you insert query parameters into the URL construction process like this, so it may be a challenge to do this in other languages. I'd recommend using a small JavaScript Lambda function (or API Gateway plus a Lambda function) that simply creates and returns the customized pre-signed URL.
The custom query parameters are also tamper-proof: they are included in the signing of the URL, so if you tamper with them, the URL becomes invalid, yielding 403 Forbidden.
I used this code to generate your pre-signed URL. The result was:
https://s3.amazonaws.com/MyBucket/MyItem
?Foo=123
&X-Amz-Algorithm=AWS4-HMAC-SHA256
&X-Amz-Credential=AKIA...27%2Fus-east-1%2Fs3%2Faws4_request
&X-Amz-Date=20180427T001234Z
&X-Amz-Expires=3600
&X-Amz-Signature=e3...7b
&X-Amz-SignedHeaders=host
None of this is a guarantee that this technique will continue to work, of course, if AWS changes things under the covers, but for right now it seems to work and is certainly useful.
Attribution: the source of this discovery was aws-sdk-js/issues/502.
If you change, add, or remove any of the signed components, you have to re-sign the URL.
This is part of the AWS signing design; the process is built for a higher level of security, and it was one of the reasons AWS moved from Signature Version 2 to Signature Version 4.
The signing design does not know which headers are important and which are not; tracking that across all of the AWS services would be a nightmare.
I created this solution for the Ruby SDK. It is sort of a hack, but it works as expected:
require 'aws-sdk-s3'
require 'active_support/core_ext/object/to_query.rb'
# Modified S3 pre signer class that can inject query params to the URL
#
# Usage example:
#
# bucket_name = "bucket_name"
# key = "path/to/file.json"
# filename = "download_file_name.json"
# duration = 3600
#
# params = {
# bucket: bucket_name,
# key: key,
# response_content_disposition: "attachment; filename=#{filename}",
# expires_in: duration
# }
#
# signer = S3PreSignerWithQueryParams.new({'x-your-custom-field': "banana", 'x-some-other-field': 1234})
# url = signer.presigned_url(:get_object, params)
#
# puts "url = #{url}"
#
class S3PreSignerWithQueryParams < Aws::S3::Presigner
  def initialize(query_params = {}, options = {})
    @query_params = query_params
    super(options)
  end

  def build_signer(cfg)
    signer = super(cfg)
    my_params = @query_params.to_h.to_query()
    signer.define_singleton_method(:presign_url,
      lambda do |options|
        options[:url].query += "&" + my_params
        super(options)
      end)
    signer
  end
end
While not documented, you can add parameters as arguments to the call to presigned_url.
obj.presigned_url(:get,
  expires_in: expires_in_sec,
  response_content_disposition: "attachment"
)
https://bucket.s3.us-east-2.amazonaws.com/file.txt?response-content-disposition=attachment&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=PUBLICKEY%2F20220309%2Fus-east-2%2Fs3%2Faws4_request&X-Amz-Date=20220309T031958Z&X-Amz-Expires=43200&X-Amz-SignedHeaders=host&X-Amz-Signature=SIGNATUREVALUE
If you are looking for the JavaScript SDK v3:
import { HttpRequest } from "@aws-sdk/protocol-http";
import { S3RequestPresigner } from "@aws-sdk/s3-request-presigner";
import { parseUrl } from "@aws-sdk/url-parser";
import { Sha256 } from "@aws-crypto/sha256-browser";
import { Hash } from "@aws-sdk/hash-node";
import { formatUrl } from "@aws-sdk/util-format-url";

// Make the custom query in Record<string, string | Array<string> | null> format
const customQuery = {
  hello: "world",
};

const s3ObjectUrl = parseUrl(
  `https://${bucketName}.s3.${region}.amazonaws.com/${key}`
);
s3ObjectUrl.query = customQuery; // Insert the custom query here

const presigner = new S3RequestPresigner({
  credentials,
  region,
  sha256: Hash.bind(null, "sha256"), // In Node.js
  // sha256: Sha256, // In browsers
});

// Create a GET request from the S3 URL.
const url = await presigner.presign(new HttpRequest(s3ObjectUrl));
console.log("PRESIGNED URL: ", formatUrl(url));
Code template taken from: https://aws.amazon.com/blogs/developer/generate-presigned-url-modular-aws-sdk-javascript/
I'm trying to figure out a way to generate torrent files from a bucket, using the AWS SDK for Go.
I'm using a pre-signed URL (since it's a private bucket):
svc := s3.New(session.New(config))

req, _ := svc.GetObjectRequest(&s3.GetObjectInput{
    Bucket: aws.String("bucketName"),
    Key:    aws.String("key"),
})

// sign the url
url, err := req.Presign(120 * time.Minute)
From the docs, the syntax to generate a torrent is:
GET /ObjectName?torrent HTTP/1.1
Host: BucketName.s3.amazonaws.com
Date: date
Authorization: authorization string
How do I add the ?torrent parameter to a pre-signed URL in Go?
Try the GetObjectTorrent method of the AWS Go SDK. It will return the .torrent file in its response; here is example code:
svc := s3.New(session.New())

input := &s3.GetObjectTorrentInput{
    Bucket: aws.String("bucketName"),
    Key:    aws.String("key"),
}

result, err := svc.GetObjectTorrent(input)
if err != nil {
    fmt.Println(err)
    return
}
fmt.Println(result)
Please see http://docs.aws.amazon.com/sdk-for-go/api/service/s3/#S3.GetObjectTorrent for more details. Hope it helps.
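Note that GetObjectTorrent performs an authenticated API call rather than producing a pre-signed URL. If you specifically need a pre-signed URL with the ?torrent subresource, one avenue worth trying, by analogy with the JavaScript 'build' hook shown earlier, is appending the parameter in a build handler before signing. This is an untested sketch under that assumption, not a documented SDK feature:

package main

import (
    "fmt"
    "time"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/request"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/s3"
)

func main() {
    svc := s3.New(session.New())

    req, _ := svc.GetObjectRequest(&s3.GetObjectInput{
        Bucket: aws.String("bucketName"),
        Key:    aws.String("key"),
    })

    // Append the ?torrent subresource after the SDK builds the request,
    // so it becomes part of the canonical query string that gets signed.
    req.Handlers.Build.PushBack(func(r *request.Request) {
        r.HTTPRequest.URL.RawQuery += "torrent"
    })

    url, err := req.Presign(120 * time.Minute)
    if err != nil {
        fmt.Println("presign failed:", err)
        return
    }
    fmt.Println(url)
}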
I'm trying to upload an image to S3 via putObject and a pre-signed URL.
Here is the URL that was provided when I generated the pre-signed URL:
https://<myS3Bucket>.s3.amazonaws.com/1ffd1c88-5661-48f9-a135-04bd569614dd.jpg?AWSAccessKeyId=<accessKey>&Expires=1458177431311&Signature=<signature>&x-amz-security-token=<token>
When I attempt to upload the file via a PUT, AWS responds with:
<?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>AccessDenied</Code>
  <Message>Invalid date (should be seconds since epoch): 1458177431311</Message>
  <RequestId>...</RequestId>
  <HostId>...</HostId>
</Error>
Here is the curl version of the request I was using:
curl -X PUT -H "Cache-Control: no-cache" -H "Postman-Token: 78e46be3-8ecc-4156-be3d-7e2f4688a127" -H "Content-Type: multipart/form-data; boundary=----WebKitFormBoundary7MA4YWxkTrZu0gW" -F "file=@[object Object]" "https://<myS3Bucket>.s3.amazonaws.com/1ffd1c88-5661-48f9-a135-04bd569614dd.jpg?AWSAccessKeyId=<accessKey>&Expires=1458177431311&Signature=<signature>&x-amz-security-token=<mySecurityToken>"
Since the timestamp is generated by AWS, it should be correct. I have tried changing it to include decimals and got the same error.
Could the problem be in the way I'm uploading the file in my request?
Update - Add code for generating the signed URL
The signed URL is being generated via the AWS JavaScript SDK:
var AWS = require('aws-sdk')
var uuid = require('node-uuid')
var Promise = require('bluebird')

var s3 = new AWS.S3()
var params = {
  Bucket: bucket, // bucket is stored as an .env variable
  Key: uuid.v4() + '.jpg' // file is always a jpg
}
return new Promise(function (resolve, reject) {
  s3.getSignedUrl('putObject', params, function (err, url) {
    if (err) {
      reject(new Error(err))
      return
    }
    var payload = { url: url }
    resolve(payload)
  })
})
My access key and secret key are loaded via environment variables as AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
Amazon S3 only supports 32-bit timestamps in signed URLs:
2147483647 (2038-01-19) is the largest timestamp you can have.
I was creating timestamps 20 years in the future, and that was breaking my signed URLs. Using a value less than 2,147,483,647 fixes the issue.
Hope this helps someone!
You just got bitten by bad documentation. It seems related to this question:
Amazon S3 invalid date when using expires in url_for
The integer is meant "to specify the number of seconds after the current time".
Since you entered an absolute epoch time (0 = 1970-01-01), the result is the current epoch time plus your value: 46 + 46 years = 92 years, which would be a crazy expiration period for S3.
I'm having a bit of trouble saving a file in Go with the AWS S3 Go SDK (https://github.com/awslabs/aws-sdk-go).
This is what I have:
import (
    "bytes"
    "fmt"

    "github.com/awslabs/aws-sdk-go/aws"
    "github.com/awslabs/aws-sdk-go/aws/awsutil"
    "github.com/awslabs/aws-sdk-go/service/s3"
)

func main() {
    cred := aws.DefaultChainCredentials
    cred.Get() // I'm using environment-variable credentials and yes, I checked that they were set

    svc := s3.New(&aws.Config{Region: "us-west-2", Credentials: cred, LogLevel: 1})

    params := &s3.PutObjectInput{
        Bucket: aws.String("my-bucket-123"),
        Key:    aws.String("test/123/"),
        Body:   bytes.NewReader([]byte("testing!")),
    }
    resp, err := svc.PutObject(params)
    if err != nil {
        fmt.Println(err)
        return
    }
    fmt.Printf("response %s", awsutil.StringValue(resp))
}
I keep receiving a 301 Moved Permanently response.
Edit: I created the bucket manually.
Edit #2: Example response:
---[ RESPONSE ]--------------------------------------
HTTP/1.1 301 Moved Permanently
Transfer-Encoding: chunked
Content-Type: application/xml
Date: Tue, 05 May 2015 18:42:03 GMT
Server: AmazonS3
The signed POST request uses plain HTTP as well.
According to Amazon:
Amazon S3 supports virtual-hosted-style and path-style access in all regions. The path-style syntax, however, requires that you use the region-specific endpoint when attempting to access a bucket. For example, if you have a bucket called mybucket that resides in the EU, you want to use path-style syntax, and the object is named puppy.jpg, the correct URI is http://s3-eu-west-1.amazonaws.com/mybucket/puppy.jpg. You will receive a "PermanentRedirect" error, an HTTP response code 301, and a message indicating what the correct URI is for your resource if you try to access a bucket outside the US Standard region with path-style syntax that uses either of the following:
http://s3.amazonaws.com
An endpoint for a region different from the one where the bucket resides, for example, http://s3-eu-west-1.amazonaws.com for a bucket that was created in the US West (Northern California) region
I think the problem is that you are trying to access a bucket in the wrong region. Your request is going here:
https://my-bucket-123.s3-us-west-2.amazonaws.com/test/123
So make sure that my-bucket-123 is actually in us-west-2. (I tried this with my own bucket and it worked fine.)
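If you are unsure where the bucket actually lives, you can ask S3 instead of guessing. A small sketch using the s3manager.GetBucketRegion helper from the current aws-sdk-go (note the import path differs from the older awslabs one in the question; the region argument is only a hint for the lookup endpoint):

package main

import (
    "context"
    "fmt"
    "log"

    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/s3/s3manager"
)

func main() {
    sess := session.Must(session.NewSession())

    // "us-west-2" is only a hint used to choose an endpoint for the lookup.
    region, err := s3manager.GetBucketRegion(context.Background(), sess, "my-bucket-123", "us-west-2")
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println("bucket lives in", region)
}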
I also verified that it's using HTTPS by wrapping the calls (their log message is just wrong):
import (
    "io"
    "log"
    "net/http"
)

type LogReadCloser struct {
    io.ReadCloser
}

func (lr *LogReadCloser) Read(p []byte) (int, error) {
    n, err := lr.ReadCloser.Read(p)
    log.Println(string(p[:n])) // log only the bytes actually read
    return n, err
}

type LogRoundTripper struct {
    http.RoundTripper
}

func (lrt *LogRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
    log.Println("REQUEST", req)
    res, err := lrt.RoundTripper.RoundTrip(req)
    if err != nil {
        return res, err // don't touch res.Body on transport errors
    }
    log.Println("RESPONSE", res)
    res.Body = &LogReadCloser{res.Body}
    return res, nil
}
And then:
svc := s3.New(&aws.Config{
    Region:      "us-west-2",
    Credentials: cred,
    LogLevel:    0,
    HTTPClient:  &http.Client{Transport: &LogRoundTripper{http.DefaultTransport}},
})
I think you're better off using the S3 Uploader. Here's an example from my code. It's a web app using the Gin framework; I take a file from a web form, upload it to S3, and retrieve the URL so other HTML pages can present it:
// Create an S3 Uploader
uploader := s3manager.NewUploader(sess)

// Upload
result, err := uploader.Upload(&s3manager.UploadInput{
    Bucket: aws.String(bucket),
    Key:    aws.String(fileHeader.Filename),
    Body:   f,
})
if err != nil {
    c.HTML(http.StatusBadRequest, "create-project.html", gin.H{
        "ErrorTitle":   "S3 Upload Failed",
        "ErrorMessage": err.Error()})
} else {
    // Success: result.Location is the URL of the uploaded object.
    fmt.Println("Successfully uploaded to", result.Location)
}
You can find an entire example here, explained step by step:
https://www.matscloud.com/docs/cloud-sdk/go-and-s3/