I need to access an object in S3 using a presigned URL. The problem is that the client doesn't have a public IPv4 address, so I need to use IPv6. Normally you can just switch the URL to the dualstack endpoint (see https://docs.aws.amazon.com/AmazonS3/latest/userguide/dual-stack-endpoints.html), but a presigned URL's signature is only valid for the exact host it was generated with. So if you try what I just described, you get this in the response:
<Error>
<Code>SignatureDoesNotMatch</Code>
<Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message>
<AWSAccessKeyId>ACCESSKEY</AWSAccessKeyId>
<StringToSign>foo</StringToSign>
<SignatureProvided>foo</SignatureProvided>
<StringToSignBytes>41 [ ... ] 55 68 </StringToSignBytes>
<CanonicalRequest>GET /FILE.zip X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=FOO%2Feu-central-1%2Fs3%2Faws4_request&X-Amz-Date=20220901T103255Z&X-Amz-Expires=120&X-Amz-Security-Token=TOKEN_FOO&X-Amz-SignedHeaders=host&response-content-disposition=inline host:bucket.s3.dualstack.eu-central-1.amazonaws.com host UNSIGNED-PAYLOAD</CanonicalRequest>
<CanonicalRequestBytes>47 [ ... ] 41 44</CanonicalRequestBytes>
<RequestId>RFGJSDFK454B9967</RequestId>
<HostId>FOO</HostId>
</Error>
I know there are workarounds that don't use presigned URLs, but I'd like to keep the simplicity of this solution.
How are you generating the presigned URL? Have you configured the CLI/SDK to use the dual stack endpoints?
boto3 example:
import boto3
from botocore.config import Config

# Tell the client to sign requests against the dual-stack (IPv6-capable) endpoint.
client = boto3.client("s3", config=Config(s3={'use_dualstack_endpoint': True}))
url = client.generate_presigned_url("get_object", Params={"Bucket": "sampleipv6bucket", "Key": "test.json"})
Result:
https://sampleipv6bucket.s3.dualstack.us-east-1.amazonaws.com/test.json?AWSAccessKeyId=ASIA.....
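If the URL should also carry the SigV4 query parameters shown in the question's error (X-Amz-Algorithm=AWS4-HMAC-SHA256), the signature version and expiry can be pinned explicitly; this is a small variation on the example above, and both extra settings are my additions rather than part of the original answer:

import boto3
from botocore.config import Config

# Force SigV4 and the dual-stack (IPv6-capable) endpoint in one config.
config = Config(signature_version="s3v4", s3={"use_dualstack_endpoint": True})
client = boto3.client("s3", config=config)

url = client.generate_presigned_url(
    "get_object",
    Params={"Bucket": "sampleipv6bucket", "Key": "test.json"},
    ExpiresIn=120,  # seconds, matching X-Amz-Expires=120 from the question
)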
I'm using Cloudflare CDN to serve my website.
All my static files and downloads are on GCS, and I'm using the GCS Python library to generate v4 signed URLs in case users want to download some files from my website.
The problem is that when I use Google's generate_download_signed_url_v4 function,
it gives me back a signed URL starting with https://storage.googleapis.com/my_bucket/........
I want to replace this link with my own subdomain, e.g. download.domain.com.
I figured out that I have to use bucket_bound_hostname,
but when I use it and try to download with the given URL, I get this message:
<Error>
<Code>NoSuchKey</Code>
<Message>The specified key does not exist.</Message>
</Error>
and this is the function I use:
import datetime

from google.cloud import storage

def generate_download_signed_url_v4(bucket_name, blob_name):
    storage_client = storage.Client()
    bucket = storage_client.bucket(bucket_name)
    blob = bucket.blob(blob_name)
    url = blob.generate_signed_url(
        version="v4",
        # This URL is valid for ...
        expiration=datetime.timedelta(minutes=10),
        # Allow GET requests using this URL.
        method="GET",
        bucket_bound_hostname="mysub.domain.com",
        scheme="https",
    )
    return url
PS: I've added a CNAME record in my DNS settings pointing to c.storage.googleapis.com.
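For reference, here is a hypothetical call to the function above, showing the shape of URL that bucket_bound_hostname produces (the bucket and object names are placeholders):

# Hypothetical usage; bucket and object names are placeholders.
url = generate_download_signed_url_v4("my_bucket", "downloads/report.pdf")

# With bucket_bound_hostname set, the signed URL is rooted at the custom
# host and the bucket name no longer appears in the path, e.g.
#   https://mysub.domain.com/downloads/report.pdf?X-Goog-Algorithm=...
# whereas the default form is
#   https://storage.googleapis.com/my_bucket/downloads/report.pdf?X-Goog-Algorithm=...
print(url)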
My requirement is to upload multiple webm files (captured using WebRTC) to S3 using a one-time generated pre-signed URL.
I have tried the code below to generate the pre-signed URL, and I'm using Postman to upload the files.
def create_presigned_url(method_name, s3object, expiration=36000):
    try:
        response = s3_client.generate_presigned_post(
            S3Bucket,
            Key="",
            Fields=None,
            Conditions=[
                ["content-length-range", 100, 1000000000],
                ["starts-with", "$key", "/path-to-file/"],
            ],
            ExpiresIn=expiration,
        )
    except Exception as e:
        logging.error(e)
        return None
    return response
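For context, the dict returned by generate_presigned_post is normally consumed like this (a hedged sketch with placeholder file and key names; Postman does the equivalent with a form-data body):

import requests  # any HTTP client works; requests is used only for illustration

presigned = create_presigned_url("POST", "recording.webm")

# The response contains a 'url' and the signed policy 'fields', which must be
# sent as form fields alongside the file. With a "starts-with" condition on
# $key, the uploader chooses the key itself, within the allowed prefix.
fields = dict(presigned["fields"])
fields["key"] = "/path-to-file/recording-1.webm"  # hypothetical; must satisfy the starts-with prefix

with open("recording-1.webm", "rb") as f:
    resp = requests.post(presigned["url"], data=fields, files={"file": f})
print(resp.status_code)  # S3 returns 204 by default on success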
I'm getting the below error when I try it from Postman.
Wildcards are not supported in presigned URLs.
I have not been able to find any documentation that clearly states this; however, I had to achieve the same today and my findings show that it is not possible.
I created a presigned URL with the key test/*.
I was only able to retrieve the content of a file in S3 that was named test/*, but not any other files with the test/ prefix. For each of the other files the request failed with "The request signature we calculated does not match the signature you provided. Check your key and signing method.".
This error specifically states that the request does not match the signature, which is different from when I create a signed URL for an object that does not exist and the request fails because the key cannot be found.
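To make the experiment above concrete, a presigned GET URL is bound to one exact key; here is a minimal boto3 sketch (the bucket name is a placeholder, not from the answer):

import boto3

s3 = boto3.client("s3")

# The signature covers the literal key "test/*", so the URL only ever serves
# an object actually named "test/*", never other objects under the test/ prefix.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-bucket", "Key": "test/*"},
    ExpiresIn=300,
)
print(url)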
I'm trying to provide a pre-signed URL that, once the image is uploaded, grants the group Everyone read access to the uploaded image.
So far, I'm generating the pre-signed URL as follows:
val req = GeneratePresignedUrlRequest(params.s3Bucket,"$uuid.jpg",HttpMethod.PUT)
req.expiration = expiration
req.addRequestParameter("x-amz-acl","public-read")
req.addRequestParameter("ContentType","image/jpeg")
val url: URL = s3Client.generatePresignedUrl(req)
But the image, once I check in S3, does not have the expected read access.
The HTTP client that performs the upload needs to include the x-amz-acl: public-read header.
In your example, you're generating a request that includes that header, but then you're generating a presigned URL from that request.
URLs don't contain HTTP headers, so whatever HTTP client you're using to perform the actual upload is not setting the header when it sends the request to the generated URL.
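For illustration, the uploading side would have to send that header itself; here is a minimal sketch in Python with the requests library (the client choice and file name are my assumptions, and the URL is the presigned PUT URL generated above):

import requests  # used only for illustration; any HTTP client can set headers

presigned_put_url = "<presigned PUT URL generated above>"

with open("photo.jpg", "rb") as f:
    resp = requests.put(
        presigned_put_url,
        data=f,
        # The header the answer refers to; it has to accompany the actual upload.
        headers={"x-amz-acl": "public-read"},
    )
print(resp.status_code)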
This simple answer is working for me.
val url = getS3Connection()!!.generatePresignedUrl(
    "bucketname", "key",
    Date(Date().time + 1000 * 60 * 300)
)
Expected: I want to get signed URLs with my AWS CloudFront URL.
What I have done: I have created an AWS CloudFront distribution and enabled the Restrict Viewer Access function, with Trusted Signers set to Self.
Below is the PHP code I use to sign the URL:
function getSignedURL()
{
    $resource = 'http://d2qui8qg6d31zk.cloudfront.net/richardcuicks3sample/140-140.bmp';
    $timeout = 300;
    //This comes from key pair you generated for cloudfront
    $keyPairId = "YOUR_CLOUDFRONT_KEY_PAIR_ID";
    $expires = time() + $timeout; //Time out in seconds
    $json = '{"Statement":[{"Resource":"'.$resource.'","Condition":{"DateLessThan":{"AWS:EpochTime":'.$expires.'}}}]}';
    //Read Cloudfront Private Key Pair
    $fp = fopen("private_key.pem", "r");
    $priv_key = fread($fp, 8192);
    fclose($fp);
    //Create the private key
    $key = openssl_get_privatekey($priv_key);
    if(!$key)
    {
        echo "<p>Failed to load private key!</p>";
        return;
    }
    //Sign the policy with the private key
    if(!openssl_sign($json, $signed_policy, $key, OPENSSL_ALGO_SHA1))
    {
        echo '<p>Failed to sign policy: '.openssl_error_string().'</p>';
        return;
    }
    //Create url safe signed policy
    $base64_signed_policy = base64_encode($signed_policy);
    $signature = str_replace(array('+','=','/'), array('-','_','~'), $base64_signed_policy);
    //Construct the URL
    $url = $resource.'?Expires='.$expires.'&Signature='.$signature.'&Key-Pair-Id='.$keyPairId;
    return $url;
}
For $keyPairId and private_key.pem, I logged in with my root account and generated these two values in the Security Credentials -> CloudFront Key Pairs section.
If I access http://d2qui8qg6d31zk.cloudfront.net/richardcuicks3sample/140-140.bmp in the browser directly, it responds with:
<Error>
<Code>MissingKey</Code>
<Message>
Missing Key-Pair-Id query parameter or cookie value
</Message>
</Error>
After I run the function, I get a long signed URL; when I open it in the Chrome browser, it responds with:
<Error>
<Code>InvalidKey</Code>
<Message>Unknown Key</Message>
</Error>
Question: I have searched the AWS documentation and Google a lot about this. Could anyone tell me why this happens, or whether I'm missing something? Thanks in advance!
$priv_key=fread($fp,8192);
If I understand correctly, you generated the key yourself. If so, it looks like you are using a key size that is not supported.
The key pair must be an SSH-2 RSA key pair.
The key pair must be in base64 encoded PEM format.
The supported key lengths are 1024, 2048, and 4096 bits.
Docs: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-trusted-signers.html#private-content-creating-cloudfront-key-pairs
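For completeness, one way to produce a PEM-encoded 2048-bit RSA key that meets these constraints is the Python cryptography package (this tooling is my assumption; the docs above describe the requirements, not this method):

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# 2048 bits falls within the supported 1024/2048/4096 range.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

pem = key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.TraditionalOpenSSL,
    encryption_algorithm=serialization.NoEncryption(),
)

with open("private_key.pem", "wb") as f:
    f.write(pem)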
I opted for Trusted Key Groups, and I got that InvalidKey/Unknown Key error when I initially assumed the key pair ID was the same as the access key ID under "My Security Credentials". The correct one to use is the ID of your public key (CloudFront > Key Management > Public Keys).
Thanks @imperalix for answering this question.
I have solved this issue.
Inspired by this site, I found that I was signing the wrong CloudFront URL.
Before: http://d2qui8qg6d31zk.cloudfront.net/richardcuicks3sample/140-140.bmp
After: http://d2qui8qg6d31zk.cloudfront.net/140-140.bmp
Because I created the CloudFront distribution for the richardcuicks3sample bucket, there is no need to include the bucket name in the URL. After I changed the URL, the signed URL works well.
I'm using boto's S3 Key.generate_url to get an expiring signed URL. This works as expected, except when used with cross-origin requests (CORS). Since my bucket is not in the default region, the request gets a 307 redirect, and this makes the OPTIONS request redirect and fail. If I change the hostname of the generated URL to my bucket's location, i.e. xxbucket-namexx.s3-us-west-1.amazonaws.com, the CORS request works fine.
Any ideas on how to always generate the canonical hostname for these URLs?
Here is how I'm generating the URL.
import boto
from boto.s3.key import Key

conn = boto.connect_s3()
bucket = conn.get_bucket('bucket-name')
key = Key(bucket)
key.key = 'image-name'
signed_url = key.generate_url(
    60 * 5, method='PUT', policy='public-read',
    force_http=True,
    headers={'Content-Type': self.request.get('contentType'),
             'Content-Length': self.request.get('contentLength')},
)
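One direction worth trying (a hedged sketch, not verified against this setup): connect to the bucket's region explicitly so that boto builds the URL against the regional endpoint instead of the generic one that answers with a 307 redirect.

import boto.s3
from boto.s3.key import Key

# Connecting to the bucket's own region makes boto use
# bucket-name.s3-us-west-1.amazonaws.com when building the URL.
conn = boto.s3.connect_to_region('us-west-1')
bucket = conn.get_bucket('bucket-name')
key = Key(bucket)
key.key = 'image-name'
signed_url = key.generate_url(
    60 * 5, method='PUT', policy='public-read',
    force_http=True,
    headers={'Content-Type': 'image/jpeg',    # placeholder values; the original
             'Content-Length': '123456'},     # reads them from the incoming request
)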