Invalid Date When Uploading to AWS S3

I'm trying to upload an image to S3 via putObject and a pre-signed URL.
Here is the URL that was provided when I generated the pre-signed URL:
https://<myS3Bucket>.s3.amazonaws.com/1ffd1c88-5661-48f9-a135-04bd569614dd.jpg?AWSAccessKeyId=<accessKey>&Expires=1458177431311&Signature=<signature>&x-amz-security-token=<token>
When I attempt to upload the file via a PUT, AWS responds with:
<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>AccessDenied</Code>
<Message>Invalid date (should be seconds since epoch): 1458177431311</Message>
<RequestId>...</RequestId>
<HostId>...</HostId>
</Error>
Here is the curl version of the request I was using:
curl -X PUT -H "Cache-Control: no-cache" -H "Postman-Token: 78e46be3-8ecc-4156-be3d-7e2f4688a127" -H "Content-Type: multipart/form-data; boundary=----WebKitFormBoundary7MA4YWxkTrZu0gW" -F "file=#[object Object]" "https://<myS3Bucket>.s3.amazonaws.com/1ffd1c88-5661-48f9-a135-04bd569614dd.jpg?AWSAccessKeyId=<accessKey>&Expires=1458177431311&Signature=<signature>&x-amz-security-token=<mySecurityToken>"
Since the timestamp is generated by AWS, it should be correct. I have tried changing it to include decimals and got the same error.
Could the problem be in the way I'm uploading the file in my request?
Update: added the code that generates the signed URL
The signed URL is generated via the AWS JavaScript SDK:
var AWS = require('aws-sdk')
var uuid = require('node-uuid')
var Promise = require('bluebird')
var s3 = new AWS.S3()
var params = {
  Bucket: bucket, // bucket is stored as a .env variable
  Key: uuid.v4() + '.jpg' // file is always a jpg
}
return new Promise(function (resolve, reject) {
  s3.getSignedUrl('putObject', params, function (err, url) {
    if (err) {
      return reject(new Error(err)) // return, so we don't also resolve below
    }
    var payload = { url: url }
    resolve(payload)
  })
})
My access key and secret key are loaded via environment variables as AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.

Amazon S3 only supports 32-bit timestamps in signed URLs:
2147483647 is the largest timestamp you can have.
I was creating timestamps 20 years in the future, and it was breaking my signed URLs. Using a value less than 2,147,483,647 fixes the issue.
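For reference, that cap is the classic 32-bit Unix time limit, which lands in January 2038; a quick sanity check in Node:
console.log(new Date(2147483647 * 1000).toISOString())
// prints 2038-01-19T03:14:07.000Z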
Hope this helps someone!

You just got bitten by bad documentation. It seems related to this link:
Amazon S3 invalid date when using expires in url_for
The integer is meant "to specify the number of seconds after the current time".
Since you passed the time as an epoch timestamp (0 = 1970-01-01), the result is current epoch time + your value: 46 + 46 years = 92 years, which is a crazy expiration period for S3.
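A minimal sketch of the fix, assuming the same params object as in the question: pass Expires as a number of seconds relative to now, not as an epoch timestamp.
var params = {
  Bucket: bucket,
  Key: uuid.v4() + '.jpg',
  Expires: 60 * 15 // 15 minutes from now, in seconds (the SDK default is 900)
}
s3.getSignedUrl('putObject', params, function (err, url) {
  // the URL's Expires query parameter is now a sane seconds-since-epoch value
})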

Related

Uploading Base64 file to S3 signed URL

I need to upload an image to S3 using a signed URL. I have the image as a base64 string. The code below runs without throwing any error, but at the end I see a text file with base64 content in S3, not the binary image.
Can you please point out what I am missing?
Generate Signed URL (Lambda function JavaScript)
const signedUrlExpireSeconds = 60 * 100;
var url = s3.getSignedUrl("putObject", {
  Bucket: process.env.ScreenshotBucket,
  Key: s3Key,
  ContentType: "image/jpeg",
  ContentEncoding: "base64",
  Expires: signedUrlExpireSeconds,
});
Upload to S3 (Java Code)
HttpRequest request = HttpRequest.newBuilder()
    .PUT(HttpRequest.BodyPublishers.ofString(body))
    .header("Content-Encoding", "base64")
    .header("Content-Type", "image/jpeg")
    .uri(URI.create(url))
    .build();
HttpResponse<String> response = httpClient.send(request, HttpResponse.BodyHandlers.ofString());
if (response.statusCode() != 200) {
    throw new Exception(response.body());
}
I am not familiar with the AWS JavaScript SDK, but it seems that setting the 'Content-Type' metadata of the object (not the Content-Type of the putObject HTTP request) to 'image/jpeg' should do the trick.
Fixed it while just playing around with the combinations.
HttpRequest request = HttpRequest.newBuilder().PUT(HttpRequest.BodyPublishers.ofString(body))
Changed to
HttpRequest request = HttpRequest.newBuilder().PUT(HttpRequest.BodyPublishers.ofByteArray(body))
(Note that BodyPublishers.ofByteArray takes a byte[], so the base64 string has to be decoded to raw bytes first. A PUT stores the request body verbatim, which is why sending the base64 text produced a text file.)
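If the upload were done from Node.js instead, the same principle applies: decode the base64 payload to raw bytes before the PUT. A hedged sketch (url and base64String are assumed; Node 18+, inside an async function):
const bytes = Buffer.from(base64String, 'base64'); // raw JPEG bytes, not text
const res = await fetch(url, {
  method: 'PUT',
  headers: { 'Content-Type': 'image/jpeg' },
  body: bytes,
});
if (!res.ok) throw new Error(await res.text());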

x-googl-acl isn't making uploaded files public

Currently I have been trying to upload objects (videos) to my Google Cloud Storage bucket. I have found out that the reason (possibly) I haven't been able to make them public is an ACL or IAM permission issue. The way it's currently done is that I get a signed URL from the backend as:
const getGoogleSignedUrl = async (root, args, context) => {
  const { filename } = args;
  const googleCloud = new Storage({
    keyFilename: ,
    projectId: 'something'
  });
  const options = {
    version: 'v4',
    action: 'write',
    expires: Date.now() + 15 * 60 * 1000, // 15 minutes
    contentType: 'video/quicktime',
    extensionHeaders: {'x-googl-acl': 'public-read'}
  };
  const bucketName = 'something';
  // Get a v4 signed URL for uploading file
  const [url] = await googleCloud
    .bucket(bucketName)
    .file(filename)
    .getSignedUrl(options);
  return { url };
}
Once I have gotten temporary permission from the backend as a URL, I try to make a PUT request to upload the file as:
const response = await fetch(url, {
  method: 'PUT',
  body: blob,
  headers: {
    'x-googl-acl': 'public-read',
    'content-type': 'video/quicktime'
  }
}).then(res => console.log("thres is ", res)).catch(e => console.log(e));
Even though the file does get uploaded to Google Cloud Storage, it always shows public access as Not public. Any help would be appreciated, since I am starting to not understand how making an object public works in Google Cloud.
Within AWS (previously) it was easy to make an object public by adding x-amz-acl to a PUT request.
Thank you in advance.
Update
I have changed the code to reflect what it currently looks like. Also, when I look at the object in Google Storage after it has been uploaded, I see:
Public access: Not authorized
Type: video/quicktime
Size: 369.1 KB
Created: Feb 11, 2021, 5:49:02 PM
Last modified: Feb 11, 2021, 5:49:02 PM
Hold status: None
Retention policy: None
Encryption type: Google-managed key
Custom time: —
Public URL: Not applicable
Update 2
As stated, the issue where I wasn't able to upload the file after trying to add the recommended header was that I wasn't providing the header correctly. I changed the header from x-googl-acl to x-goog-acl, which has allowed me to upload it to the cloud.
The new problem is that Public access is now showing as Not authorized.
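For reference, a minimal sketch of the corrected signing/upload pair, assuming the same options object and fetch call shown above; the header has to be spelled x-goog-acl on both sides, since a v4 extension header becomes part of the signature and must be sent verbatim with the PUT:
// Signing side (backend)
const options = {
  version: 'v4',
  action: 'write',
  expires: Date.now() + 15 * 60 * 1000, // 15 minutes
  contentType: 'video/quicktime',
  extensionHeaders: { 'x-goog-acl': 'public-read' }
};

// Upload side (client): send the same header, unchanged
await fetch(url, {
  method: 'PUT',
  body: blob,
  headers: {
    'x-goog-acl': 'public-read',
    'content-type': 'video/quicktime'
  }
});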
Update 3
In order to try something new, I followed the directions listed here: https://www.jhanley.com/google-cloud-setting-up-gcloud-with-service-account-credentials/. Once I finished everything, the next steps I took were:
1 - Upload a new video to the cloud. This will be done using the new json provided
const googleCloud = new Storage({
  keyFilename: json_file_given,
  projectId: 'something'
});
Once the file had been uploaded, I noticed there were no changes in regard to it being public. It still has Public access Not authorized.
2 - After checking the status of the uploaded object, I went on to follow a similar approach to the one below, to make sure I am using the same account as the json key that uploaded the file.
gcloud auth activate-service-account test@development-123456.iam.gserviceaccount.com --key-file=test_google_account.json
3 - Once I confirmed I was using the right account with the right permissions, I performed the next step:
gsutil acl ch -u AllUsers:R gs://example-bucket/example-object
This actually resulted in a response of No changes to gs://crit_bull_1/google5.mov.

Call S3 pre-signed URL with Postman

I am attempting to use a pre-signed URL to upload as described in the docs (https://docs.aws.amazon.com/AmazonS3/latest/dev/PresignedUrlUploadObject.html). I can retrieve the pre-signed URL, but when I attempt to do a PUT in Postman, I receive the following error:
<Code>SignatureDoesNotMatch</Code>
<Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message>
Obviously, the way my PUT call is structured doesn't match the way AWS is calculating the signature. I can't find a lot of information on what this PUT call requires.
I've attempted to modify the Content-Type header to multipart/form-data and application/octet-stream. I've also tried unticking the headers section in Postman and relying on the body type, for both the form-data and binary settings where I select the file. The form-data setting results in the following being added to the call:
Content-Disposition: form-data; name="thefiletosend.txt"; filename="thefiletosend.txt"
In addition, I noticed that postman is including what it calls "temporary headers" as follows:
Host: s3.amazonaws.com
Content-Type: text/plain
User-Agent: PostmanRuntime/7.13.0
Accept: */*
Cache-Control: no-cache
Postman-Token: e11d1ef0-8156-4ca7-9317-9f4d22daf6c5,2135bc0e-1285-4438-bb8e-b21d31dc36db
Host: s3.amazonaws.com
accept-encoding: gzip, deflate
content-length: 14
Connection: keep-alive
cache-control: no-cache
The Content-Type header may be one of the issues, but I'm not certain how to exclude these "temporary headers" in Postman.
I am generating the pre-signed URL in a Lambda as follows:
public string FunctionHandler(Input input, ILambdaContext context)
{
    _logger = context.Logger;
    _key = input.key;
    _bucketname = input.bucketname;
    string signedURL = _s3Client.GetPreSignedURL(new GetPreSignedUrlRequest()
    {
        Verb = HttpVerb.PUT,
        Protocol = Protocol.HTTPS,
        BucketName = _bucketname,
        Key = _key,
        Expires = DateTime.Now.AddMinutes(5)
    });
    returnObj returnVal = new returnObj() { url = signedURL };
    return JsonConvert.SerializeObject(returnVal);
}
Your pre-signed URL should look like https://bucket-name.s3.region.amazonaws.com/folder/filename.jpg?AWSAccessKeyId=XXX&Content-Type=image%2Fjpeg&Expires=XXX&Signature=XXX
You can upload to S3 with Postman by:
Set the above URL as the endpoint
Select PUT request
Body -> binary -> Select file
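Outside Postman, the same request is just a raw PUT of the file bytes to the pre-signed URL. A minimal Node sketch (presignedUrl is assumed; Node 18+, inside an async function):
const { readFile } = require('node:fs/promises');

const bytes = await readFile('thefiletosend.txt');
const res = await fetch(presignedUrl, {
  method: 'PUT',
  body: bytes, // no extra headers, unless they were part of the signature (e.g. the Content-Type above)
});
console.log(res.status); // 200 on success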
I was able to get this working in Postman using a POST request. Here are the details of what worked for me. When I call my lambda to get a presigned URL, here is the JSON that comes back (after I masked sensitive and app-specific information):
{
  "attachmentName": "MySecondAttachment.docx",
  "url": "https://my-s3-bucket.s3.amazonaws.com/",
  "fields": {
    "acl": "public-read",
    "Content-Type": "multipart/form-data",
    "key": "attachment-upload/R271645/65397746_MySecondAttachment.docx",
    "x-amz-algorithm": "AWS4-HMAC-SHA256",
    "x-amz-credential": "WWWWWWWW/20200318/us-east-1/s3/aws4_request",
    "x-amz-date": "20200318T133309Z",
    "x-amz-security-token": "XXXXXXXX",
    "policy": "YYYYYYYY",
    "x-amz-signature": "ZZZZZZZZ"
  }
}
In Postman, create a POST request, and use form-data to enter all the fields you got back, with exactly the same field names shown in the response above. Do not set the content type, however. Then add one more key named "file".
To the right of the word file, if you click the drop-down you can browse to your file and attach it.
In case it helps, I'm using a lambda written in Python to generate a presigned URL so a user can upload an attachment. The code looks like this:
signedURL = self.s3.generate_presigned_post(
    Bucket="my-s3-bucket",
    Key=putkey,
    Fields={"acl": "public-read", "Content-Type": "multipart/form-data"},
    ExpiresIn=15,
    Conditions=[
        {"acl": "public-read"},
        ["content-length-range", 1, 5120000]
    ]
)
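For what it's worth, the same presigned POST can be submitted from code as a multipart/form-data request: include every returned field verbatim, and append the file part last (S3 ignores fields that come after the file). A hedged Node 18+ sketch, where resp is assumed to be the JSON shown above and bytes the file contents:
const form = new FormData();
for (const [name, value] of Object.entries(resp.fields)) {
  form.append(name, value); // every signed field, unchanged
}
form.append('file', new Blob([bytes]), resp.attachmentName); // the file must come last
const res = await fetch(resp.url, { method: 'POST', body: form });
console.log(res.status); // 204 by default on success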
Hope this helps.
I was facing the same problem, and below is how it worked for me.
Note: I am making the signed URL using the AWS S3 Java SDK, as my backend is in Java. I set the content type to "application/octet-stream" while creating this signed URL so that any type of content can be uploaded. Below is my Java code for generating the signed URL.
public String createS3SignedURLUpload(String bucketName, String objectKey) {
    try {
        PutObjectRequest objectRequest = PutObjectRequest.builder()
                .bucket(bucketName)
                .key(objectKey)
                .contentType("application/octet-stream")
                .build();
        S3Presigner presigner = S3Presigner.builder()
                .region(s3bucketRegions.get(bucketName))
                .credentialsProvider(StaticCredentialsProvider.create(awsBasicCredentials))
                .build();
        PutObjectPresignRequest presignRequest = PutObjectPresignRequest.builder()
                .signatureDuration(Duration.ofMinutes(presignedURLTimeoutInMins))
                .putObjectRequest(objectRequest)
                .build();
        PresignedPutObjectRequest presignedRequest = presigner.presignPutObject(presignRequest);
        return presignedRequest.url().toString();
    } catch (Exception e) {
        throw new CustomRuntimeException(e.getMessage());
    }
}
Now to upload the file using Postman:
Set the generated URL as the endpoint
Select PUT request
Body -> binary -> Select file
Set the Content-Type header to application/octet-stream (this point I was missing earlier)
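The general rule behind that last step: whatever ContentType was baked into the signature must be sent verbatim with the upload. From code, the equivalent PUT would look like this sketch (url and fileBytes are assumed; Node 18+, inside an async function):
const res = await fetch(url, {
  method: 'PUT',
  headers: { 'Content-Type': 'application/octet-stream' }, // must match the signed content type
  body: fileBytes,
});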
It actually depends on how you generated the URL.
If you generated it using Java:
Set the generated URL as the endpoint
Select PUT request
Body -> binary -> Select file
If you generated it using Python:
Create a POST request, and use form-data to enter all the fields you got back, with exactly the same field names you got back in the signed URL response.
Do not set the content type, however. Then add one more key named "file".
Refer to the picture in the accepted answer.

SignatureDoesNotMatch on S3 PUT Request to Presigned URL

I am generating a presigned URL server-side to allow my client application to upload a file directly to the S3 bucket. Everything works fine unless the client application is running on a computer in a timezone that is technically a day ahead of my server clock.
I can recreate the issue locally by setting my system clock ahead to a timezone on the next day.
Here is how I am generating the presigned URL using the .NET SDK (I originally had DateTime.Now instead of UtcNow):
var request = new GetPreSignedUrlRequest
{
    BucketName = bucketName,
    Key = objectName,
    Verb = HttpVerb.PUT,
    Expires = DateTime.UtcNow.AddDays(5),
    ContentType = "application/octet-stream"
};
request.Headers["x-amz-acl"] = "bucket-owner-full-control";
request.Metadata.Add("call", JsonConvert.SerializeObject(call).ToString());
return client.GetPreSignedURL(request);
and then I am using that presigned URL in the client application like this:
using (var fileStream = new FileStream(recordingPath, FileMode.Open))
using (var client = new WebClient())
{
    HttpContent fileStreamContent = new StreamContent(fileStream);
    var bytes = await fileStreamContent.ReadAsByteArrayAsync();
    client.Headers.Add("Content-Type", "application/octet-stream");
    // include metadata in the PUT request
    client.Headers.Add("x-amz-meta-call", JsonConvert.SerializeObject(Call));
    await client.UploadDataTaskAsync(new Uri(presignedUrl), "PUT", bytes);
}
Here is the error I am receiving from AWS:
<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>SignatureDoesNotMatch</Code>
<Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message>
<AWSAccessKeyId>{access}</AWSAccessKeyId>
<StringToSign>....
The requests appear mostly identical to me in Fiddler.
Works:
PUT https://{bucketname}.s3.amazonaws.com/1c849c76-dd2a-4ff7-aad7-23ec7e9ddd45_encoded.opus?X-Amz-Expires=18000&x-amz-security-token={security_token}&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential={cred}&X-Amz-Date=20190312T021419Z&X-Amz-SignedHeaders=content-type;host;x-amz-acl;x-amz-meta-call;x-amz-security-token&X-Amz-Signature={sig} HTTP/1.1
x-amz-meta-call: {json_string}
x-amz-acl: bucket-owner-full-control
Content-Type: application/octet-stream
Host: {bucketname}.s3.amazonaws.com
Content-Length: 28289
Expect: 100-continue
{file}
Does not work:
PUT https://{bucketname}.s3.amazonaws.com/4cca3ec3-9f3f-4ba4-9d81-6336090610c0_encoded.opus?X-Amz-Expires=18000&x-amz-security-token={security_token}&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential={credentials}&X-Amz-Date=20190312T021541Z&X-Amz-SignedHeaders=content-type;host;x-amz-acl;x-amz-meta-call;x-amz-security-token&X-Amz-Signature={sig} HTTP/1.1
x-amz-meta-call: {json_string}
x-amz-acl: bucket-owner-full-control
Content-Type: application/octet-stream
Host: {bucketname}.s3.amazonaws.com
Content-Length: 18714
Expect: 100-continue
{file}
In both scenarios, the presigned URL has the same x-amz-date parameter generated. I have even tried parsing out the x-amz-date parameter from the URL and explicitly setting it as a header in my PUT, but that did not work either.
What am I missing?
It turned out that I was using a different version of the signature.
v4 worked perfectly for me.
In JS, instantiate S3 as:
const s3 = new AWS.S3({
  signatureVersion: 'v4'
});
So the issue ended up being within the metadata. In our setup, we had the client application posting a JSON string up to our API along with the file to generate the presigned URL. We were using Json.NET to deserialize into the C# class:
var call = JsonConvert.DeserializeObject<Call>(request.Params["metadata"]);
Apparently, this call converts any timestamps in the JSON to local time. This means that we would sign the URL with metadata timestamps local to the API server, but actually upload the file with metadata timestamps local to the client. Because x-amz-meta-call is one of the signed headers, this difference is why the calculated signatures differed; serializing the timestamps consistently (for example, always in UTC) on both sides resolves it.

Specify byte range via query string in Get Object S3 request

I'm familiar with the Range HTTP header; however, the interface I'm using to query S3 (an img element's .src property) doesn't allow me to specify HTTP headers.
Is there a way for me to specify my desired range via a parameter in the query string?
It doesn't seem like there is, but I'm just holding out a shred of hope before I roll my own solution with ajax requests.
Amazon S3 supports Range GET requests, as do some HTTP servers, for example, Apache and IIS.
How CloudFront Processes Partial Requests for an Object (Range GETs)
I tried to get my S3 object via cURL:
curl -r 0-1024 https://s3.amazonaws.com/mybucket/myobject -o part1
curl -r 1025- https://s3.amazonaws.com/mybucket/myobject -o part2
cat part1 part2 > myobject
and AWS SDK for JavaScript:
var s3 = new AWS.S3();
var file = require('fs').createWriteStream('part1');
var params = {
  Bucket: 'mybucket',
  Key: 'myobject',
  Range: 'bytes=0-1024'
};
s3.getObject(params).createReadStream().pipe(file);
These two methods work fine for me.
AWS SDK for JavaScript API Reference (getObject)
Below is the Java code with the AWS v2 SDK.
Format the range as below:
var range = String.format("bytes=%d-%d", start, end);
and pass it to the getObjectAsBytes API via the GetObjectRequest builder:
ResponseBytes<GetObjectResponse> currentS3Obj = client.getObjectAsBytes(GetObjectRequest.builder().bucket(bucket).key(key).range(range).build());
return currentS3Obj.asInputStream();