Direct file upload to S3 using sigv4 - amazon-web-services

I'm looking for a secure way to upload large files directly to S3 (for performance reasons).
After a few hours of research, I've come to the (possibly incorrect) conclusion that I should use "Browser-Based Uploads Using POST".
As referenced in this thread:
Amazon S3 direct file upload from client browser - private key disclosure
Before trying that in the browser, I thought about building a cURL proof of concept with a direct upload and manual signature computation.
I've failed to make it work, and I haven't found a working proof of concept on the web that sets up Signature Version 4 manually.
My signature is OK.
The only issue is that Amazon seems to be SHA256ing my file content twice and is thus not validating my x-amz-content-sha256 header.
lower(SHA256(e8379a31b13fb9423928fe28dd41a5e3204a52072634503c31e8b3ea42605b46))
= 4fa84cd7d18e0d33dbd62d0492eca4a159e122391ae0a3e636bd3cf527680c87
I'm not sure I understand what I should put in the cURL body, in the canonical request payload (and the corresponding Content-Length value), and in the x-amz-content-sha256 header.
Should they all be the same value?
If yes, then the Amazon doc specifies it should all be hashed with SHA256, so I have no idea why Amazon re-SHA256s my already-SHA256 payload...
Error:
<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>XAmzContentSHA256Mismatch</Code>
<Message>The provided 'x-amz-content-sha256' header does not match what was computed.</Message>
<ClientComputedContentSHA256>e8379a31b13fb9423928fe28dd41a5e3204a52072634503c31e8b3ea42605b46</ClientComputedContentSHA256>
<S3ComputedContentSHA256>4fa84cd7d18e0d33dbd62d0492eca4a159e122391ae0a3e636bd3cf527680c87</S3ComputedContentSHA256>
<RequestId>419A185269B0F891</RequestId>
<HostId>QHWxK0Mzz6AfG44ypXBti3W0tYx1xkG9lZGqc2kUKyMF9STwP18M3racio0k06aH5+1ok/Irdn8=</HostId>
</Error>
cURL command:
curl -v "https://??.s3.amazonaws.com/recordtest/test.jpg" \
  -H "Authorization: AWS4-HMAC-SHA256 Credential=??/20170228/eu-west-1/s3/aws4_request, SignedHeaders=content-length;content-type;host;x-amz-content-sha256;x-amz-date, Signature=43750caa762314eb70aace1f7f8ae34633b93352aa25646433ef21e48dd79429" \
  -H "Content-Length: 64" \
  -H "Content-Type: application/octet-stream" \
  -H "x-amz-content-sha256: e8379a31b13fb9423928fe28dd41a5e3204a52072634503c31e8b3ea42605b46" \
  -H "x-amz-date: 20170228T111828Z" \
  -d "e8379a31b13fb9423928fe28dd41a5e3204a52072634503c31e8b3ea42605b46" \
  -X PUT
Generated canonical request:
PUT
/recordtest/test.jpg
content-length:64
content-type:application/octet-stream
host:??.s3.amazonaws.com
x-amz-content-sha256:e8379a31b13fb9423928fe28dd41a5e3204a52072634503c31e8b3ea42605b46
x-amz-date:20170228T111200Z
content-length;content-type;host;x-amz-content-sha256;x-amz-date
e8379a31b13fb9423928fe28dd41a5e3204a52072634503c31e8b3ea42605b46
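For reference, here is how I compute the payload hash locally (a minimal sketch using openssl; test.jpg stands for my local file):
# SHA-256 of the raw file bytes, hex-encoded
openssl dgst -sha256 -hex < test.jpg | awk '{print $NF}'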

Related

Checking if AWS S3 presigned link exists using wget --spider

I've read several threads on SO about checking whether a URL exists or not in bash, e.g. #37345831, and the recommended solution was to use wget with --spider. However, the --spider option appears to fail when used with AWS S3 presigned URLs.
Calling:
wget -S --spider "${URL}" 2>&1
Results in:
HTTP request sent, awaiting response...
HTTP/1.1 403 Forbidden
x-amz-request-id: [REF]
x-amz-id-2: [REF]
Content-Type: application/xml
Date: [DATE]
Server: AmazonS3
Remote file does not exist -- broken link!!!
Whereas the following returns as expected, HTTP/1.1 200 OK, for the same input URL:
wget -S "${URL}" -O /dev/stdout | head
The version of wget I'm running is:
GNU Wget 1.20.3 built on linux-gnu.
Any clue as to what's going on?
There are several HTTP request methods, also known as HTTP verbs; two of them are relevant in this case:
GET
HEAD
When not instructed otherwise, wget issues the first of them; when the --spider option is used, the second one is sent, and the server should respond with headers only (no body).
AWS S3 presigned link
According to Signing and authenticating REST requests - Amazon Simple Storage Service, one of the preparation steps is as follows:
StringToSign = HTTP-Verb + "\n" +
Content-MD5 + "\n" +
Content-Type + "\n" +
Date + "\n" +
CanonicalizedAmzHeaders +
CanonicalizedResource;
Therefore we might conclude that an AWS S3 presigned link will work with exactly one HTTP verb. The one you have is for GET. Ask whoever crafted that link to furnish you with an AWS S3 presigned link made for HEAD if you wish to use --spider successfully.
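If you cannot get a HEAD-signed link, one possible workaround (a sketch; ${URL} is the presigned GET link from the question) is to issue the GET but discard the body and check only the status code:
# GET the presigned URL, throw away the body, print just the HTTP status
status=$(curl -s -o /dev/null -w '%{http_code}' "${URL}")
[ "${status}" = "200" ] && echo "exists" || echo "not found (HTTP ${status})"
Note that unlike a real HEAD request this still transfers the object body over the wire.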

Call AWS Elasticsearch Service API with cURL --aws-sigv4

When I execute
curl --request GET "https://${ES_DOMAIN_ENDPOINT}/my_index_pattern-*/my_type/_mapping" \
--user $AWS_ACCESS_KEY_ID:$AWS_SECRET_ACCESS_KEY \
--aws-sigv4 "aws:amz:ap-southeast-2:es"
where $ES_DOMAIN_ENDPOINT is my AWS Elasticsearch endpoint, I'm getting the following response:
{"message":"The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details."}
I'm confident that my $AWS_ACCESS_KEY_ID:$AWS_SECRET_ACCESS_KEY are correct.
However, when I send the same request from Postman with AWS Authentication and the parameters above, the response comes through. I compared the verbose output of both requests and they have only minor differences, such as timestamps and the signature.
I'm wondering, what is wrong with the --aws-sigv4 config?
This issue happens due to the * character in the path. There is a bug report in the curl repository to fix this issue: https://github.com/curl/curl/issues/7559.
Meanwhile, to mitigate the error you should either remove the * from the path or build curl from the branch https://github.com/outscale-mgo/curl-appimage/tree/http_aws_sigv4_encoding.
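For example, the same call with a concrete index name instead of the * pattern (a sketch; my_index_pattern-2021.01 is just a hypothetical index name):
curl --request GET "https://${ES_DOMAIN_ENDPOINT}/my_index_pattern-2021.01/my_type/_mapping" \
    --user "$AWS_ACCESS_KEY_ID:$AWS_SECRET_ACCESS_KEY" \
    --aws-sigv4 "aws:amz:ap-southeast-2:es"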

Call AWS CloudFront API with cURL

Is it possible to call the AWS CloudFront API with cURL? I would like to trigger an invalidation without installing many dependencies. But I'm unable to make a simple GET request to the API:
curl -v -X GET \
-H "Date: $(date -R)" \
-H "Authorization: AWS ${CLOUDFRONT_ACCESS_KEY}:$(echo -en ${CLOUDFRONT_ACCESS_KEY} | openssl sha1 -hmac ${CLOUDFRONT_SECRET_ACCESS_KEY} -binary | base64)" \
https://cloudfront.amazonaws.com/2020-05-31/distribution/EMC3WW4JXXXXX/invalidation/IXMUICGG7L77A
Results
<?xml version="1.0" encoding="UTF-8"?>
<ErrorResponse xmlns="http://cloudfront.amazonaws.com/doc/2020-05-31/">
<Error>
<Type>Sender</Type>
<Code>IncompleteSignature</Code>
<Message>Authorization header requires 'Credential' parameter. Authorization header requires 'Signature' parameter. Authorization header requires 'SignedHeaders' parameter. Authorization=AWS AKIAJG77PBXLMN5YQI7A:K62YyDlWiVf/yr44YSs7BbsQYDQ=</Message>
</Error>
<RequestId>f9e5b7de-bce6-4bfd-951e-2986ae5bc1a3</RequestId>
</ErrorResponse>
You can take a look at Signing AWS requests with Signature Version 4 for signing the request.
Plus, the invalidation API call needs more parameters, as per the documentation.
I usually keep this handy: s3-rest-api-with-curl
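If your curl is 7.75 or newer, a rough sketch of the same GET using curl's built-in SigV4 signing could look like this (CloudFront requests are signed against us-east-1; the distribution and invalidation IDs are the placeholders from the question):
# Let curl compute the SigV4 Authorization header for the CloudFront API
curl --request GET \
    --user "$AWS_ACCESS_KEY_ID:$AWS_SECRET_ACCESS_KEY" \
    --aws-sigv4 "aws:amz:us-east-1:cloudfront" \
    "https://cloudfront.amazonaws.com/2020-05-31/distribution/EMC3WW4JXXXXX/invalidation/IXMUICGG7L77A"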

Upload to S3 bucket through API Gateway AWS Service Proxy

As in the title, I can't seem to get it to work. I'm following the high-level guide detailed here, but any images uploaded seem to be blank.
What I've set up:
/images/{object} - PUT
> Integration Request
AWS Region: ap-southeast-2
AWS Service: S3
AWS Subdomain: [bucket name here]
HTTP method: PUT
Path override: /{object}
Execution Role: [I have one set up]
> URL Path Parameters
object -> method.request.path.object
I'm trying to use Postman to send a PUT request with Content-Type: image/png and the body is a binary upload of a png file.
I've also tried using curl:
curl -X PUT -H "Authorization: Bearer [token]" -H "Content-Type: image/gif" --upload-file ~/Pictures/bart.gif https://[api-url]/dev/images/cool.gif
It creates the file on the server, but the size seems to be double whatever was uploaded; when viewed, I just get "image has an error".
When I try with .txt files (Content-Type: text/plain) it seems to work, though.
Any ideas?
After reading a lot and chatting with AWS technical support, the problem seems to be that you can't do binary uploads through API Gateway, as anything that passes through is automatically UTF-8 encoded.
There are a few workarounds for this I can think of; my solution will be to base64-encode the files before upload and trigger a Lambda to decode them when they hit the bucket.
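A rough sketch of the client side of that workaround (the [token] and [api-url] placeholders are the ones from the question; the Lambda that decodes the object in the bucket is not shown):
# Encode the image as base64 text, then PUT the text through API Gateway
base64 ~/Pictures/bart.gif > /tmp/bart.gif.b64
curl -X PUT -H "Authorization: Bearer [token]" \
    -H "Content-Type: text/plain" \
    --upload-file /tmp/bart.gif.b64 \
    https://[api-url]/dev/images/cool.gif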
This is an old post, but I have a solution.
AWS now supports binary uploads through API Gateway.
In short, go to your API settings and add a Binary Media Type.
After that, you can handle the file in base64.
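For example, with the AWS CLI this can be done roughly like so (a sketch; $API_ID is an assumed variable holding your REST API id, image/gif is just the type from the question, and ~1 encodes the / in the patch path):
# Register image/gif as a binary media type on the REST API
aws apigateway update-rest-api \
    --rest-api-id "$API_ID" \
    --patch-operations op=add,path=/binaryMediaTypes/image~1gif
The API typically needs to be redeployed for the change to take effect.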

Does curl need some special parameters to enforce charset/encoding?

I'm using a curl command to send a large file (100+ MB) to a web service. I'm noticing that the file gets mangled and data is lost, but only when I send it to the web service using curl.
Here is the command I'm using to send the file:
curl -v --raw -X POST -H "Transfer-Encoding: chunked" -H "Content-Type: text/xml; charset=UTF-8" -d @medline16n0736.xml "http://localhost:2323/TestWebService"
Am I missing something? I thought telling it to use text/xml and charset=UTF-8 would keep it UTF-8 once received by the web service.
You are asking curl to post the XML file using the -d option, which will post the file as if it were being submitted via an HTML web form, in application/x-www-form-urlencoded format. To post the file by itself, use the -T option instead. Also, you are using the --raw option, which disables handling of HTTP transfer encodings, even though you are sending a Transfer-Encoding: chunked header. Remove --raw and -T will detect the header and enable chunking.
You are also asking curl to send a Content-Type header to tell the WebService that the uploaded data is UTF-8 encoded XML. It is your responsibility to make sure the XML file is actually UTF-8 encoded. Curl won't check that for you.
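Putting that together, a corrected command along those lines might look like this (a sketch; it keeps the POST verb from the question via -X POST, which overrides -T's default PUT):
# Upload the file as-is, chunked, without --raw
curl -v -X POST -T medline16n0736.xml \
    -H "Transfer-Encoding: chunked" \
    -H "Content-Type: text/xml; charset=UTF-8" \
    "http://localhost:2323/TestWebService"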