How to generate a download signed URL v4 in GCS and Cloudflare - google-cloud-platform

I'm using Cloudflare CDN to serve my website.
All my static files and downloads are on GCS, and I'm using the GCS Python library to generate signed URLs (v4) in case users want to download some files from my website.
The problem is that when I use the function generate_download_signed_url_v4 from Google,
it gives me back a signed URL starting with https://storage.googleapis.com/my_bucket/........
I want to change this link to my own subdomain, e.g. download.domain.com.
I figured out that I have to use bucket_bound_hostname,
but when I use it and try to download with the given URL, I get this message:
<Error>
<Code>NoSuchKey</Code>
<Message>The specified key does not exist.</Message>
</Error>
And this is the function I use:
import datetime

from google.cloud import storage

def generate_download_signed_url_v4(bucket_name, blob_name):
    storage_client = storage.Client()
    bucket = storage_client.bucket(bucket_name)
    blob = bucket.blob(blob_name)
    url = blob.generate_signed_url(
        version="v4",
        # This URL is valid for 10 minutes.
        expiration=datetime.timedelta(minutes=10),
        # Allow GET requests using this URL.
        method="GET",
        bucket_bound_hostname="mysub.domain.com",
        scheme="https",
    )
    return url
PS: I've added a CNAME record in my DNS settings pointing to c.storage.googleapis.com.
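For context, bucket_bound_hostname drops the bucket name from the URL path, so GCS has to resolve the bucket from the Host header. With a CNAME pointing at c.storage.googleapis.com, that only works when the bucket is named exactly like the hostname. A minimal sketch under that assumption (the bucket name download.domain.com is hypothetical):
import datetime

from google.cloud import storage

def generate_download_signed_url_v4(blob_name):
    # Assumption: the bucket itself is named "download.domain.com" and the
    # DNS CNAME for download.domain.com points at c.storage.googleapis.com,
    # so GCS can resolve the bucket from the Host header.
    client = storage.Client()
    blob = client.bucket("download.domain.com").blob(blob_name)
    return blob.generate_signed_url(
        version="v4",
        expiration=datetime.timedelta(minutes=10),
        method="GET",
        bucket_bound_hostname="download.domain.com",
        scheme="https",
    )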

Related

Is there a way to access S3 Presigned URL via IPv6?

I need to access an object from S3 using a presigned URL. The problem is that the client doesn't have a public IPv4 address, so I need to use IPv6. Usually you can just add dualstack to the URL (see https://docs.aws.amazon.com/AmazonS3/latest/userguide/dual-stack-endpoints.html), but a presigned URL's signature only covers the exact host it was generated for. So if you try what I just mentioned, you get this in your response:
<Error>
<Code>SignatureDoesNotMatch</Code>
<Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message>
<AWSAccessKeyId>ACCESSKEY</AWSAccessKeyId>
<StringToSign>foo</StringToSign>
<SignatureProvided>foo</SignatureProvided>
<StringToSignBytes>41 [ ... ] 55 68 </StringToSignBytes>
<CanonicalRequest>GET /FILE.zip X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=FOO%2Feu-central-1%2Fs3%2Faws4_request&X-Amz-Date=20220901T103255Z&X-Amz-Expires=120&X-Amz-Security-Token=TOKEN_FOO&X-Amz-SignedHeaders=host&response-content-disposition=inline host:bucket.s3.dualstack.eu-central-1.amazonaws.com host UNSIGNED-PAYLOAD</CanonicalRequest>
<CanonicalRequestBytes>47 [ ... ] 41 44</CanonicalRequestBytes>
<RequestId>RFGJSDFK454B9967</RequestId>
<HostId>FOO</HostId>
</Error>
I know there are workarounds that don't use presigned URLs, but I'd like to keep the simplicity of this solution.
How are you generating the presigned URL? Have you configured the CLI/SDK to use the dual stack endpoints?
boto3 example:
import boto3
from botocore.config import Config

# Configure the SDK to use the dual-stack (IPv4/IPv6) endpoints so the
# generated URL is signed for the dualstack hostname.
client = boto3.client("s3", config=Config(s3={'use_dualstack_endpoint': True}))
url = client.generate_presigned_url("get_object", Params={"Bucket": "sampleipv6bucket", "Key": "test.json"})
Result:
https://sampleipv6bucket.s3.dualstack.us-east-1.amazonaws.com/test.json?AWSAccessKeyId=ASIA.....

How to use wildcards for key/object to generate aws pre-signed url in django

My requirement is to upload multiple webm files (captured using WebRTC) to S3 using a one-time generated pre-signed URL.
I have tried the code below to generate the pre-signed URL, and I'm using Postman to upload the files:
def create_presigned_url(method_name, s3object, expiration=36000):
    try:
        response = s3_client.generate_presigned_post(
            S3Bucket,
            Key="",
            Fields=None,
            Conditions=[
                ["content-length-range", 100, 1000000000],
                ["starts-with", "$key", "/path-to-file/"],
            ],
            ExpiresIn=expiration,
        )
    except Exception as e:
        logging.error(e)
        return None
    return response
I get the error below when I try it from Postman:
Wildcards are not supported in presigned URLs.
I have not been able to find any documentation that clearly states this; however, I had to achieve the same thing today, and my findings show that it is not possible.
I created a presigned URL with the key test/*.
I was only able to retrieve the content of a file in S3 that was literally named test/*, but not any other files with the test/ prefix. For each of the other files the request failed with "The request signature we calculated does not match the signature you provided. Check your key and signing method.".
This error specifically states that the request does not match the signature, which is different from when I made a signed URL for an object that does not exist and the request failed because the key could not be found.
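Since each presigned URL is bound to a single key, the practical workaround is to generate one URL per file. A minimal sketch with boto3, using hypothetical bucket and key names:
import boto3

s3_client = boto3.client("s3")

def create_presigned_urls(bucket, keys, expiration=36000):
    # One presigned PUT URL per object key; a URL's signature covers
    # exactly one key, so wildcards and prefixes cannot be signed.
    return {
        key: s3_client.generate_presigned_url(
            "put_object",
            Params={"Bucket": bucket, "Key": key},
            ExpiresIn=expiration,
        )
        for key in keys
    }

urls = create_presigned_urls("my-bucket", ["path-to-file/a.webm", "path-to-file/b.webm"])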

S3 - Upload - how to generate a pre-signed url that gives EVERYONE read access to the object?

I'm trying to provide a pre-signed URL that, once the image is uploaded, grants the group Everyone read access to the uploaded image.
So far, I'm generating the pre-signed URL with the following steps:
val req = GeneratePresignedUrlRequest(params.s3Bucket,"$uuid.jpg",HttpMethod.PUT)
req.expiration = expiration
req.addRequestParameter("x-amz-acl","public-read")
req.addRequestParameter("ContentType","image/jpeg")
val url: URL = s3Client.generatePresignedUrl(req)
But the image, once I check in S3, does not have the expected read access.
The HTTP client that performs the upload needs to include the x-amz-acl: public-read header.
In your example, you're generating a request that includes that header. But, then you're generating a presigned URL from that request.
URLs don't contain HTTP headers, so whatever HTTP client you're using to perform the actual upload is not setting the header when it sends the request to the generated URL.
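For illustration, a minimal Python upload sketch that does send the header, assuming the URL was generated with x-amz-acl as a signed header (presigned_url and the file name are hypothetical):
import requests

with open("image.jpg", "rb") as f:
    # The upload request itself must carry the x-amz-acl header for the
    # ACL to be applied to the uploaded object.
    response = requests.put(
        presigned_url,
        data=f,
        headers={
            "x-amz-acl": "public-read",
            "Content-Type": "image/jpeg",
        },
    )
response.raise_for_status()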
This simple answer is working for me:
val url = getS3Connection()!!.generatePresignedUrl(
    "bucketname", "key",
    Date(Date().time + 1000 * 60 * 300)
)

How do I serve index.html in subfolders with S3/Cloudfront?

Got a bucket called www.foo.site. In that site there's a landing page, an about page, and some pages in a few bar/* folders. Each bar/* folder has an index.html page: bar/a/index.html, bar/b/index.html, etc.
The landing page is running fine (meaning www.foo.site loads when I browse to it), but the /about/index.html and /bar/index.html pages aren't getting served when I click my about links etc. If I curl those URLs I get a 404. I've tried setting the origin path and origin domain name separately:
First try:
domain name: www.foo.site.s3.amazonaws.com
origin path: (blank)
Second try:
domain name: s3-us-west-1.amazonaws.com
origin path: www.foo.site
Default document is index.html for both.
Neither one worked. All of the S3 pages mentioned above are directly browsable. Meaning https://s3-us-west-1.amazonaws.com/www.foo.site/bar/index.html loads the expected html.
This must be some Cloudfront setting I'm missing. Possibly something in my DNS records? Is it possible to serve html files in S3 "folders" via Cloudfront?
Here are a couple of resources that are helpful when serving index.html from S3 implicitly via https://domain/folder/ rather than having to explicitly use https://domain/folder/index.html:
Serving index pages from a non-root location via CloudFront (now unavailable)
Implementing Default Indexes in CloudFront Origins Using Lambda#Edge
The key thing when configuring your CloudFront distribution is:
do not configure a default root object for your CloudFront distribution
If you configure index.html as the default root object then https://domain/ will correctly serve https://domain/index.html but no subfolder reference such as https://domain/folder/ will work.
It's also important not to use the dropdown in CloudFront when connecting the CF distribution to the S3 bucket. You need to use the URL of the S3 static website endpoint instead.
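For reference, the Lambda@Edge approach from the article linked above boils down to rewriting trailing-slash URIs before CloudFront forwards the request to S3. A minimal sketch, assuming the function is attached as an origin-request trigger:
def lambda_handler(event, context):
    # CloudFront passes the request object inside the event record.
    request = event["Records"][0]["cf"]["request"]
    # Rewrite "/folder/" to "/folder/index.html" before it reaches S3.
    if request["uri"].endswith("/"):
        request["uri"] += "index.html"
    return request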
CloudFront serves S3 files with keys ending in /
After investigation, it appears that one can create this type of file in S3 programmatically. Therefore, I wrote a small Lambda that is triggered when a file with the suffix index.html or index.htm is created on S3.
What it does is copy the object dir/subdir/index.html to the object dir/subdir/
import json
import boto3

s3_client = boto3.client("s3")

def lambda_handler(event, context):
    for f in event['Records']:
        bucket_name = f['s3']['bucket']['name']
        key_name = f['s3']['object']['key']
        source_object = {'Bucket': bucket_name, 'Key': key_name}
        file_key_name = False
        # Strip the trailing "index.html" / "index.htm" to get the
        # "directory" key, skipping objects at the bucket root.
        if key_name[-10:].lower() == "index.html" and key_name.lower() != "index.html":
            file_key_name = key_name[0:-10]
        elif key_name[-9:].lower() == "index.htm" and key_name.lower() != "index.htm":
            file_key_name = key_name[0:-9]
        if file_key_name:
            # Copy dir/subdir/index.html to dir/subdir/ so the folder
            # URL can be served directly.
            s3_client.copy_object(CopySource=source_object, Bucket=bucket_name, Key=file_key_name)

Can boto generate a url that is canonical to S3 bucket? Getting CORS redirect issue

I'm using boto's S3 Key.generate_url to get an expiring signed URL. This works as expected, except when used with cross-origin requests (CORS). Since my bucket is not in the default region, it gets a 307 redirect, and this makes the OPTIONS preflight request redirect and fail. If I change the hostname of the generated URL to my bucket's location, i.e. xxbucket-namexx.s3-us-west-1.amazonaws.com, the CORS request works fine.
Any ideas on how to get boto to always generate the canonical hostname for these URLs?
Here is how I'm generating the URL:
import boto
from boto.s3.key import Key

conn = boto.connect_s3()
bucket = conn.get_bucket('bucket-name')
key = Key(bucket)
key.key = 'image-name'
signed_url = key.generate_url(
    60 * 5, method='PUT', policy='public-read',
    force_http=True,
    headers={'Content-Type': self.request.get('contentType'),
             'Content-Length': self.request.get('contentLength')},
)
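One thing worth trying (an assumption on my part, not from the original post): connect to the bucket's actual region so boto emits the regional hostname in generated URLs, which avoids the 307 redirect on the preflight:
import boto.s3
from boto.s3.key import Key

# Assumption: the bucket lives in us-west-1. Connecting to that region
# makes generate_url use the canonical regional hostname, e.g.
# bucket-name.s3-us-west-1.amazonaws.com, so the OPTIONS request is
# not redirected.
conn = boto.s3.connect_to_region('us-west-1')
bucket = conn.get_bucket('bucket-name')
key = Key(bucket)
key.key = 'image-name'
signed_url = key.generate_url(60 * 5, method='PUT', policy='public-read')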