How can I set S3 putObject options when using signed URLs to upload files from the client - amazon-web-services

I am using signed URLs to upload files directly from the client straight into my S3 bucket. To do this, I create a signed URL for the command I want to perform, then make a direct PUT request to upload the file itself.
I create the signed url like so:
$command = $s3->getCommand('PutObject', array(
    'Bucket' => $this->_bucket,
    'Key' => $key,
    'ACL' => 'public-read',
    'CacheControl' => 'max-age=0',
    'ContentEncoding' => 'gzip',
    'ContentType' => $filetype,
    'Body' => '',
    'ContentMD5' => false
));
$signedUrl = $command->createPresignedUrl('+6 hours');
However, after performing the PUT request and uploading the file, the Cache-Control and Content-Encoding headers are not set on the object.
Does anyone have any idea where I am going wrong?

The headers still have to be set in the PUT request. Including them in the signed URL isn't sufficient.
The pre-signed URL only serves to ensure that the actual request parameters match the authorized request parameters (otherwise the request fails).
So, if what I am saying is correct, then if these parameters aren't being sent with the request, it should fail, right? Almost.
Unfortunately, V2 authentication does not validate all request headers, Content-Encoding being one example:
Note how only the Content-Type and Content-MD5 HTTP entity headers appear in the StringToSign. The other Content-* entity headers do not.
— http://docs.aws.amazon.com/AmazonS3/latest/dev/RESTAuthentication.html
The same is true of Cache-Control. Other than those two exceptions, only x-amz-* headers are subject to validation against the signature in V2 (which uses &Signature= in the query string).
V4 auth (which, by contrast, uses &X-Amz-Signature= in the query string) has a mechanism for specifying which headers must be validated against the signature, but in either case you have to send the headers with the actual request itself, not just include them in the signature. It appears that you aren't, and that's why they are not set.
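To make that concrete, here is a minimal sketch of the client-side PUT that would accompany the signed URL above, written in Python with the requests library for brevity; the URL, file name, and content type are placeholder assumptions standing in for your own values. The point is that every header baked into the signed command must also travel with the actual request:

import requests

# Placeholder; substitute the URL returned by createPresignedUrl().
signed_url = 'https://my-bucket.s3.amazonaws.com/my-key?Expires=XXX&Signature=XXX'

with open('payload.gz', 'rb') as f:
    # Each header that was part of the signed command must be sent explicitly;
    # including it in the signature alone does not set it on the object.
    response = requests.put(
        signed_url,
        data=f,
        headers={
            'Cache-Control': 'max-age=0',
            'Content-Encoding': 'gzip',
            'Content-Type': 'application/javascript',  # stand-in for $filetype
        },
    )
response.raise_for_status()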

Related

Call S3 pre-signed URL with Postman

I am attempting to use a pre-signed URL to upload as described in the docs (https://docs.aws.amazon.com/AmazonS3/latest/dev/PresignedUrlUploadObject.html). I can retrieve the pre-signed URL, but when I attempt to do a PUT in Postman, I receive the following error:
<Code>SignatureDoesNotMatch</Code>
<Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message>
Obviously, the way my PUT call is structured doesn't match the way AWS is calculating the signature. I can't find much information on what this PUT call requires.
I've attempted to modify the Content-Type header to multipart/form-data and application/octet-stream. I've also tried to untick the Headers section in Postman and rely on the body type, for both the form-data and binary settings where I select the file. The form-data setting results in the following being added to the call:
Content-Disposition: form-data; name="thefiletosend.txt"; filename="thefiletosend.txt"
In addition, I noticed that postman is including what it calls "temporary headers" as follows:
Host: s3.amazonaws.com
Content-Type: text/plain
User-Agent: PostmanRuntime/7.13.0
Accept: */*
Cache-Control: no-cache
Postman-Token: e11d1ef0-8156-4ca7-9317-9f4d22daf6c5,2135bc0e-1285-4438-bb8e-b21d31dc36db
Host: s3.amazonaws.com
accept-encoding: gzip, deflate
content-length: 14
Connection: keep-alive
cache-control: no-cache
The Content-Type header may be one of the issues, but I'm not certain how to exclude these "temporary headers" in Postman.
I am generating the pre-signed URL in a lambda as follows:
public string FunctionHandler(Input input, ILambdaContext context)
{
    _logger = context.Logger;
    _key = input.key;
    _bucketname = input.bucketname;

    string signedURL = _s3Client.GetPreSignedURL(new GetPreSignedUrlRequest()
    {
        Verb = HttpVerb.PUT,
        Protocol = Protocol.HTTPS,
        BucketName = _bucketname,
        Key = _key,
        Expires = DateTime.Now.AddMinutes(5)
    });

    returnObj returnVal = new returnObj() { url = signedURL };
    return JsonConvert.SerializeObject(returnVal);
}
Your pre-signed URL should look like https://bucket-name.s3.region.amazonaws.com/folder/filename.jpg?AWSAccessKeyId=XXX&Content-Type=image%2Fjpeg&Expires=XXX&Signature=XXX
You can upload to S3 with Postman by:
Set the above URL as the endpoint
Select the PUT request
Body -> binary -> Select file
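Outside Postman, the equivalent upload is a single PUT with the file bytes as the raw body; a minimal sketch in Python with requests, assuming a URL presigned for PUT with no additional signed headers:

import requests

# Placeholder; substitute the URL returned by the GetPreSignedURL lambda above.
presigned_url = 'https://bucket-name.s3.region.amazonaws.com/key?AWSAccessKeyId=XXX&Expires=XXX&Signature=XXX'

with open('thefiletosend.txt', 'rb') as f:
    response = requests.put(presigned_url, data=f)
print(response.status_code)  # expect 200 on success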
I was able to get this working in Postman using a POST request. Here are the details of what worked for me. When I call my lambda to get a presigned URL, here is the JSON that comes back (after I masked sensitive and app-specific information):
{
    "attachmentName": "MySecondAttachment.docx",
    "url": "https://my-s3-bucket.s3.amazonaws.com/",
    "fields": {
        "acl": "public-read",
        "Content-Type": "multipart/form-data",
        "key": "attachment-upload/R271645/65397746_MySecondAttachment.docx",
        "x-amz-algorithm": "AWS4-HMAC-SHA256",
        "x-amz-credential": "WWWWWWWW/20200318/us-east-1/s3/aws4_request",
        "x-amz-date": "20200318T133309Z",
        "x-amz-security-token": "XXXXXXXX",
        "policy": "YYYYYYYY",
        "x-amz-signature": "ZZZZZZZZ"
    }
}
In Postman, create a POST request, and use form-data to enter all the fields you got back, with exactly the same field names shown in the response above. Do not set the content type, however. Then add one more key named "file".
To the right of the word "file", click the drop-down to browse to your file and attach it.
In case it helps, I'm using a lambda written in Python to generate a presigned URL so a user can upload an attachment. The code looks like this:
signedURL = self.s3.generate_presigned_post(
    Bucket="my-s3-bucket",
    Key=putkey,
    Fields={"acl": "public-read", "Content-Type": "multipart/form-data"},
    ExpiresIn=15,
    Conditions=[
        {"acl": "public-read"},
        ["content-length-range", 1, 5120000]
    ]
)
Hope this helps.
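For anyone doing this from code rather than Postman, the same flow can be sketched in Python with requests; 'signed' stands for the parsed JSON shown above, and the helper name is hypothetical:

import requests

def upload_with_presigned_post(signed, path):
    # The signed fields go in as ordinary form fields, exactly as entered in
    # Postman; the file part must be named 'file' and come after the fields.
    with open(path, 'rb') as f:
        response = requests.post(
            signed['url'],
            data=signed['fields'],
            files={'file': (signed['attachmentName'], f)},
        )
    # With no success_action_status field, S3 answers 204 No Content on success.
    response.raise_for_status()
    return response.status_code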
I was facing the same problem, and below is how it worked for me.
Note: I am generating the signed URL using the AWS S3 Java SDK, as my backend is in Java. I set the content type to "application/octet-stream" while creating this signed URL so that any type of content can be uploaded. Below is my Java code generating the signed URL.
public String createS3SignedURLUpload(String bucketName, String objectKey) {
    try {
        PutObjectRequest objectRequest = PutObjectRequest.builder()
                .bucket(bucketName)
                .key(objectKey)
                .contentType("application/octet-stream")
                .build();
        S3Presigner presigner = S3Presigner.builder()
                .region(s3bucketRegions.get(bucketName))
                .credentialsProvider(StaticCredentialsProvider.create(awsBasicCredentials))
                .build();
        PutObjectPresignRequest presignRequest = PutObjectPresignRequest.builder()
                .signatureDuration(Duration.ofMinutes(presignedURLTimeoutInMins))
                .putObjectRequest(objectRequest)
                .build();
        PresignedPutObjectRequest presignedRequest = presigner.presignPutObject(presignRequest);
        return presignedRequest.url().toString();
    } catch (Exception e) {
        throw new CustomRuntimeException(e.getMessage());
    }
}
Now, to upload the file using Postman:
Set the generated URL as the endpoint
Select the PUT request
Body -> binary -> Select file
Set the Content-Type header to application/octet-stream (this is the point I was missing earlier)
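The same steps translate directly to code; a minimal sketch in Python with requests, assuming a URL presigned as above with Content-Type application/octet-stream (the URL and file name are placeholders):

import requests

# Placeholder; substitute the URL returned by createS3SignedURLUpload().
signed_url = 'https://bucket-name.s3.region.amazonaws.com/key?X-Amz-Signature=XXX'

with open('anyfile.bin', 'rb') as f:
    # The Content-Type header must match the one baked into the signature,
    # otherwise S3 rejects the upload with SignatureDoesNotMatch.
    response = requests.put(
        signed_url,
        data=f,
        headers={'Content-Type': 'application/octet-stream'},
    )
print(response.status_code)  # expect 200 on success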
It actually depends on how you generated the URL.
If you generated it using Java:
Set the generated URL as the endpoint
Select the PUT request
Body -> binary -> Select file
If you generated it using Python:
Create a POST request, and use form-data to enter all the fields you got back, with exactly the same field names returned in the signed URL response shown above.
Do not set the content type, however. Then add one more key named "file".
Refer to the accepted answer's picture.

Google Cloud Storage: CORS settings doesn't work for signed URLs

The response to a PUT request with a signed URL doesn't contain the Access-Control-Allow-Origin header.
import os
from datetime import timedelta

import requests
from google.cloud import storage

os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = <path to google credentials>

client = storage.Client()
bucket = client.get_bucket('my_bucket')

policies = [
    {
        'origin': ['*'],
        'method': ['PUT'],
    }
]
bucket.cors = policies
bucket.update()

blob = bucket.blob('new_file')
url = blob.generate_signed_url(timedelta(days=30), method='PUT')

response = requests.put(url, data='some data')
for header in response.headers.keys():
    print(header)
Output:
X-GUploader-UploadID
ETag
x-goog-generation
x-goog-metageneration
x-goog-hash
x-goog-stored-content-length
x-goog-stored-content-encoding
Vary
Content-Length
Date
Server
Content-Type
Alt-Svc
As you can see, there are no CORS headers. So, can I conclude that GCS doesn't support CORS properly/fully?
Cross-Origin Resource Sharing (CORS) allows interactions between resources from different origins. By default, it is prohibited/disabled in Google Cloud Storage in order to prevent malicious behavior.
You can enable it using the Cloud client libraries, the REST API, or the Cloud SDK, keeping in mind the following rules:
Authenticate using a user or service account with FULL_CONTROL permission on Cloud Storage.
To get proper CORS headers, use one of the two XML API URLs:
- storage.googleapis.com/[BUCKET_NAME]
- [BUCKET_NAME].storage.googleapis.com
The URL storage.cloud.google.com/[BUCKET_NAME] will not respond with CORS headers.
The request needs a proper ORIGIN header matching the bucket policy's ORIGIN configuration, as stated in point 3 of the CORS troubleshooting documentation. In the case of your code:
headers = {
    'ORIGIN': '*'
}
response = requests.put(url, data='some data', headers=headers)
for header in response.headers.keys():
    print(header)
gives the following output:
X-GUploader-UploadID
ETag
x-goog-generation
x-goog-metageneration
x-goog-hash
x-goog-stored-content-length
x-goog-stored-content-encoding
Access-Control-Allow-Origin
Access-Control-Expose-Headers
Content-Length
Date
Server
Content-Type
I had this issue. For me, the problem was that I was using POST instead of PUT. Furthermore, I had to set the Content-Type of the upload to match the content type used to generate the form. The default Content-Type in the demo is "application/octet-stream", so I had to change it to whatever the content type of the upload was. When doing the XMLHttpRequest, I just had to send the file directly instead of using FormData.
This was how I got the signed URL:
const options = {
    version: 'v4',
    action: 'write',
    expires: Date.now() + 15 * 60 * 1000, // 15 minutes
    contentType: 'application/octet-stream',
};

// Get a v4 signed URL for uploading the file
const [url] = await storage
    .bucket("lsa-storage")
    .file(upload.id)
    .getSignedUrl(options as any);
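The matching upload then sends the raw file bytes, not FormData, with the same Content-Type that was passed to getSignedUrl; sketched here in Python with requests under that assumption (the URL and file name are placeholders):

import requests

# Placeholder; substitute the v4 signed URL generated above.
signed_url = 'https://storage.googleapis.com/lsa-storage/upload-id?X-Goog-Signature=XXX'

with open('upload.bin', 'rb') as f:
    # Content-Type must equal the contentType used when signing.
    response = requests.put(
        signed_url,
        data=f,
        headers={'Content-Type': 'application/octet-stream'},
    )
response.raise_for_status()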

Returning a stream in AWS API Gateway -> Lambda function?

I have created an API using AWS API Gateway, like https://api.mydomain.com/v1/download?id=1234. The download resource has a GET method, and the GET method invokes a Lambda function using Lambda Proxy Integration.
The Lambda function needs to act as a proxy. It needs to resolve the correct backend endpoint based on the x-clientId header, forward the request to that backend endpoint, and return the response as-is. So it needs to be generic enough to handle GET requests with different content types.
My Lambda function looks like this (.NET Core):
public async Task<APIGatewayProxyResponse> Route(APIGatewayProxyRequest input, ILambdaContext context)
{
    var clientId = input.Headers["x-clientId"];
    var mappings = new Mappings();
    var url = await mappings.GetBackendUrl(clientId, input.Resource);

    var httpClient = new HttpClient();
    var response = await httpClient.GetAsync(url);
    response.EnsureSuccessStatusCode();

    var proxyResponse = new APIGatewayProxyResponse()
    {
        Headers = new Dictionary<string, string>(),
        StatusCode = (int)System.Net.HttpStatusCode.OK,
        IsBase64Encoded = false,
        Body = await response.Content.ReadAsStringAsync()
    };
    return proxyResponse;
}
The handler above works as long as the request and response content type is application/json or application/xml. However, I am not sure how to handle the response when the backend returns a stream.
For the download API, the backend returns Content-Disposition: attachment; filename="somefilename" and the Content-Type may be one of the following:
application/pdf
application/vnd.openxmlformats-officedocument.spreadsheetml.sheet
application/vnd.openxmlformats-officedocument.wordprocessingml.document
application/x-zip-compressed
application/octet-stream
For these streams, how do I set APIGatewayProxyResponse.Body?
For an Excel file, I have tried setting the body like below:
var proxyResponse = new APIGatewayProxyResponse()
{
    Headers = new Dictionary<string, string>(),
    StatusCode = (int)System.Net.HttpStatusCode.OK,
    IsBase64Encoded = true,
    Body = Convert.ToBase64String(await response.Content.ReadAsByteArrayAsync())
};
proxyResponse.Headers.Add("Content-Type", "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet");
proxyResponse.Headers.Add("Content-Disposition", "attachment; filename=\"Report.xlsx\"");
When I access the URL from the browser and try to open the file, I get the error:
Excel cannot open the file "Report.xlsx" because the file format or file extension is not valid. Verify that the file has not been corrupted and that the extension matches the format of the file.
I think the issue is how I am setting the response body.
Update 1
Based on the AWS doc Binary Data Now Supported by API Gateway:
you can specify if you would like API Gateway to either pass the Integration Request and Response bodies through, convert them to text (Base64 encoding), or convert them to binary (Base64 decoding). These options are available for HTTP, AWS Service, and HTTP Proxy integrations. In the case of Lambda Function and Lambda Function Proxy Integrations, which currently only support JSON, the request body is always converted to JSON.
I am using Lambda Function Proxy, which currently only supports JSON. However, the example here shows how to do it with Lambda Proxy.
I think what I am missing here are the Binary Media Types and Method Response settings. Below are my settings; I am not sure if they are correct.
Binary Media
Method Response
Here is how I solved it:
1. Add Binary Media Types: API -> Settings -> Binary Media Types -> add application/vnd.openxmlformats-officedocument.spreadsheetml.sheet (a scripted alternative is sketched after this list)
2. In Method Response, add the Content-Disposition and Content-Type headers for the 200 status.
3. In Integration Response, map these headers to the headers coming from the backend, and set content handling to "Convert to binary" (our backend API returns the file blob in the body).
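If you would rather script step 1 than click through the console, the same binary media type can be added with boto3 (a sketch; the REST API id is a placeholder, and note that JSON-Patch paths escape '/' as '~1'):

import boto3

apigw = boto3.client('apigateway')

# 'abc123' is a hypothetical REST API id.
apigw.update_rest_api(
    restApiId='abc123',
    patchOperations=[{
        'op': 'add',
        'path': '/binaryMediaTypes/application~1vnd.openxmlformats-officedocument.spreadsheetml.sheet',
    }],
)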

How can I fix conflicting query string params for S3 uploads?

I'm attempting to upload raw image data to S3 in the context of a react-native app.
I have the raw data correct, and for the most part I think my code inside react-native is correctly capturing the image data.
On my Rails server, I'm using the Amazon Ruby gem to build the URL and associated authentication data required to post data to the bucket in question, which I'm then rendering into react-native just like a regular React web front end.
# inside the rails server controller
s3_data = S3_BUCKET.presigned_post(key: "uploads/#{SecureRandom.uuid}/${filename}", success_action_status: '201', acl: 'public-read', url: 'https://jd-foo.s3-us-west-2.amazonaws.com')
render json: {s3Data: {fields: s3_data.fields, url: s3_data.url}}
When I attempt to post to S3, I'm using ES6 fetch as below to build my HTTP request.
saveImage(data) {
  var url = data.url
  var fields = data.fields
  var headers = {'Content-Type': 'multipart/form-data'}
  var body = `x-amz-algorithm=${encodeURIComponent(fields['x-amz-algorithm'])}&` +
             `x-amz-credential=${encodeURIComponent(fields['x-amz-credential'])}&` +
             `x-amz-date=${encodeURIComponent(fields['x-amz-date'])}&` +
             `x-amz-signature=${encodeURIComponent(fields['x-amz-signature'])}&` +
             `acl=${encodeURIComponent(fields['acl'])}&` +
             `key=${encodeURIComponent(fields['key'])}&` +
             `policy=${encodeURIComponent(fields['policy'])}&` +
             `success_action_status=${encodeURIComponent(fields['success_action_status'])}&` +
             `file=${encodeURIComponent('12foo')}`
  console.log(body);
  return fetch(url, {method: 'POST', body: body, headers: headers})
    .then((res) => { console.log('s3 inside api res', res['_bodyText']); return res.json(); });
}
The logging of the body looks like:
x-amz-algorithm=AWS4-HMAC-SHA256&x-amz-credential=AKIAJJ22D4PSUNBB5RAQ%2F20151027%2Fus-west-1%2Fs3%2Faws4_request&x-amz-date=20151027T223159Z&x-amz-signature=42b09d7ae134f803b10ef72d220fe74a630a3f826c7f1f625448277d0a6d93c7&acl=public-read&key=uploads%2F46be8ca3-6d3a-4bb7-a658-f2c8e058bc28%2F%24%7Bfilename%7D&policy=eyJleHBpcmF0aW9uIjoiMjAxNS0xMC0yN1QyMzozMTo1OVoiLCJjb25kaXRpb25zIjpbeyJidWNrZXQiOiJqZC1mb28ifSxbInN0YXJ0cy13aXRoIiwiJGtleSIsInVwbG9hZHMvNDZiZThjYTMtNmQzYS00YmI3LWE2NTgtZjJjOGUwNThiYzI4LyJdLHsic3VjY2Vzc19hY3Rpb25fc3RhdHVzIjoiMjAxIn0seyJhY2wiOiJwdWJsaWMtcmVhZCJ9LHsieC1hbXotY3JlZGVudGlhbCI6IkFLSUFKSjIyRDRQU1VOQkI1UkFRLzIwMTUxMDI3L3VzLXdlc3QtMS9zMy9hd3M0X3JlcXVlc3QifSx7IngtYW16LWFsZ29yaXRobSI6IkFXUzQtSE1BQy1TSEEyNTYifSx7IngtYW16LWRhdGUiOiIyMDE1MTAyN1QyMjMxNTlaIn1dfQ%3D%3D&success_action_status=201&file=12foo
It seems like my problems could be tied to both:
Bad formatting of the POST body, including problems with special characters
Not providing S3 with enough data in the POST body, including keys and other information; the documentation feels a bit unclear about what is/is not required.
The error back from the S3 servers looks like:
<?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>MalformedPOSTRequest</Code>
  <Message>The body of your POST request is not well-formed multipart/form-data.</Message>
  <RequestId>DCE88AC349D7B2E8</RequestId>
  <HostId>AKE1xctETuZMAhBFLfyuFlDxikYUlbAC7YufkM7h8Z8eVQdtLA25Z0Od/a4cMUbfW1nWnGjc+vM=</HostId>
</Error>
I'm pretty unclear on what my actual problems are and where I should be digging in.
Any input would be greatly appreciated.
<Message>The body of your POST request is not well-formed multipart/form-data.</Message>
It may not be that you're missing values from the body. The most significant issue here is that the structure of your body does not resemble multipart/form-data.
See RFC 2388 for how multipart/form-data works. (Or find a library that builds this for you.)
What you are sending looks more like the application/x-www-form-urlencoded format, which is used by some AWS APIs, but not S3.
There is an example in the S3 docs showing what an example POST body might look like. You should see a substantial difference there.
Note also that POST is intended for browser-based uploads. If you are uploading from code, you're doing a lot of extra work. PUT Object is much more straightforward: the request body is simply the binary file contents. Or, if this will eventually be done by a browser, then test it with a browser and let the browser build your form.
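To illustrate the difference, here is a sketch of the same presigned POST performed with a library that assembles well-formed multipart/form-data for you, in Python with requests; the field names come from the s3_data.fields rendered by the Rails controller above, and the helper name is hypothetical:

import requests

def post_to_s3(url, fields, file_bytes, filename):
    # 'fields' is the dict from presigned_post (key, policy, acl, x-amz-*, ...).
    # requests builds the multipart body, boundaries included; the file part
    # goes last, after all of the policy fields.
    response = requests.post(
        url,
        data=fields,
        files={'file': (filename, file_bytes)},
    )
    response.raise_for_status()  # expect 201, per success_action_status: '201'
    return response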

Amazon Product API - URL response

Is there any way to dynamically generate the response for the Amazon Product API using just a URL string?
I see there are PHP and C# libraries, but I am just trying to browse to a URL and see the response. I noticed one of the required fields of the URL is a timestamp, which makes this tricky. The following page helped to generate the URLs, but I can't seem to find a way to do it dynamically:
http://associates-amazon.s3.amazonaws.com/scratchpad/index.html
Thanks!
This is the dynamic search for Amazon products.
Download aws_signed_request.php from this URL.
include('aws_signed_request.php');

$public_key = 'xxxxxxxx';
$private_key = 'xxxxxxxxxx';
$associate_tag = 'xxxxxx';
$keywords = 'PHP';
$search_index = 'Books';

// generate signed URL
$request = aws_signed_request('com', array(
    'Operation' => 'ItemSearch',
    'Keywords' => 'Php Books',
    'SearchIndex' => 'Books',
    'Count' => '24',
    'ResponseGroup' => 'Large,EditorialReview'
), $public_key, $private_key, $associate_tag);

// do request (you could also use curl etc.)
$response = @file_get_contents($request);
Documentation URL HERE
I'm not sure I totally understand your question, but I think the answer is "the data in the Amazon product API is only available via 'signed' URLs". That way, Amazon can track abuse, etc back to the source (i.e. the signer).
If it were possible to get the data with a "static" URL, then you could post that URL all over the Internet and anyone could get the data without signing up with Amazon. It's their data, and they have rules on its use, so that wouldn't fly with them.
That said, you can usually create URLs with a timestamp in the future (months or even years). But you would still be responsible for its use/abuse.
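For reference, the legacy Product Advertising API signature can be reproduced by hand; a hedged Python sketch of the documented scheme (byte-sort the parameters, sign "GET\nhost\npath\nquery" with HMAC-SHA256), with placeholder keys:

import hashlib
import hmac
import urllib.parse
from base64 import b64encode
from datetime import datetime, timezone

def build_signed_url(public_key, private_key, associate_tag, params):
    host, path = 'webservices.amazon.com', '/onca/xml'
    params = dict(params,
                  Service='AWSECommerceService',
                  AWSAccessKeyId=public_key,
                  AssociateTag=associate_tag,
                  Timestamp=datetime.now(timezone.utc).strftime('%Y-%m-%dT%H:%M:%SZ'))
    # Parameters must be RFC 3986 encoded and byte-sorted before signing.
    query = '&'.join(f"{k}={urllib.parse.quote(str(v), safe='-_.~')}"
                     for k, v in sorted(params.items()))
    to_sign = f'GET\n{host}\n{path}\n{query}'
    signature = b64encode(hmac.new(private_key.encode(), to_sign.encode(),
                                   hashlib.sha256).digest()).decode()
    return f"https://{host}{path}?{query}&Signature={urllib.parse.quote(signature, safe='')}"

This is also why a static URL can't work: the Timestamp participates in the signature, so each request needs a fresh (or future-dated) signature.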