How can I view the direct link on an Amazon S3 browser? - amazon-web-services

I'm hosting my data in an Amazon S3 bucket. My client wants to see the file directly in the browser instead of downloading it. Does anyone know how to change this from within the account?
https://s3.ap-southeast-1.amazonaws.com/faceangiang/uploads/photos/2021/06/fag_25852ed53ec643754a1b5ff366d55128.png

If we fetch the headers for the URL, we can see that the Content-Type of the image is application/octet-stream:
curl -I https://s3.ap-southeast-1.amazonaws.com/faceangiang/uploads/photos/2021/06/fag_25852ed53ec643754a1b5ff366d55128.png
This gives the response:
HTTP/1.1 200 OK
x-amz-id-2: Vjf6IS4POj2NMre0IJaHZeQMcJyFd/AMgFl77kK5sdWZakpfSHUeycZj1/8619C3rArd3QKunwk=
x-amz-request-id: JSW0VBMFXDDKFJGZ
Date: Sat, 26 Jun 2021 08:40:37 GMT
Last-Modified: Sat, 26 Jun 2021 08:10:49 GMT
ETag: "82dfa35719adef79821b4e0f90c74ab7"
Accept-Ranges: bytes
Content-Type: application/octet-stream
Server: AmazonS3
Content-Length: 169464
In order for an image to be displayed by your browser you have to set the content type to something like image/png.
The content type can be set when an image is uploaded using PutObject.
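For instance, a minimal sketch with the AWS CLI's put-object (the bucket and key names here are hypothetical):
# --content-type sets the Content-Type that S3 stores and returns with the object.
aws s3api put-object \
  --bucket my-bucket \
  --key uploads/photos/example.png \
  --body ./example.png \
  --content-type image/png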
Also, there are ways to change the content type for existing images in a bucket, according to this answer:
aws s3 cp \
s3://BUCKET-NAME/ \
s3://BUCKET-NAME/ \
--exclude '*' \
--include '*.png' \
--no-guess-mime-type \
--content-type="image/png" \
--metadata-directive="REPLACE" \
--recursive
This will change the content type recursively for all .png images.
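To verify the change took effect, fetch the headers again for one of the objects (same URL as above) and confirm the Content-Type now reads image/png:
curl -I https://s3.ap-southeast-1.amazonaws.com/faceangiang/uploads/photos/2021/06/fag_25852ed53ec643754a1b5ff366d55128.png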

It sounds like the object was uploaded to Amazon S3 without any identification as to the content type.
If you navigate to the object in Amazon S3 and look at the Metadata, it should show the Content-Type as image/png.
This tells the web browser the type of file being retrieved. If it is missing, the browser will download the file instead. (Well, it actually downloads it in both situations, but if it knows the file is a png it will display it rather than offering to save it to disk.)
The Content-Type can be provided when a file is uploaded to S3. If you upload via the management console or the AWS CLI, it is automatically set for you.
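For example, the CLI guesses the MIME type from the file extension, so a plain copy is usually enough (bucket name hypothetical):
# The CLI infers image/png from the .png extension and stores it as the Content-Type.
aws s3 cp ./photo.png s3://my-bucket/photos/photo.png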

Related

Checking if AWS S3 presigned link exists using wget --spider

I've read several threads on SO about checking whether a URL exists or not in bash, e.g. #37345831, and the recommended solution was to use wget with --spider. However, the --spider option appears to fail when used with AWS S3 presigned URLs.
Calling:
wget -S --spider "${URL}" 2>&1
Results in:
HTTP request sent, awaiting response...
HTTP/1.1 403 Forbidden
x-amz-request-id: [REF]
x-amz-id-2: [REF]
Content-Type: application/xml
Date: [DATE]
Server: AmazonS3
Remote file does not exist -- broken link!!!
Whereas the following returns as expected, HTTP/1.1 200 OK, for the same input URL:
wget -S "${URL}" -O /dev/stdout | head
The version of wget I'm running is:
GNU Wget 1.20.3 built on linux-gnu.
Any clue as to what's going on?
There are several HTTP request methods, also known as HTTP verbs; two of them are relevant here:
GET
HEAD
When not instructed otherwise, wget issues the first of them; with the --spider option it issues the second, to which the server should respond with just headers (no body).
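You can reproduce the difference with curl, where ${URL} is the same GET-signed presigned link:
# GET, which the link was signed for, succeeds:
curl -s -o /dev/null -w '%{http_code}\n' "${URL}"
# -I sends HEAD (what --spider does), so the signature no longer matches and S3 returns 403:
curl -s -o /dev/null -w '%{http_code}\n' -I "${URL}"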
AWS S3 presigned link
According to Signing and authenticating REST requests - Amazon Simple Storage Service, one of the preparation steps is building the string to sign:
StringToSign = HTTP-Verb + "\n" +
Content-MD5 + "\n" +
Content-Type + "\n" +
Date + "\n" +
CanonicalizedAmzHeaders +
CanonicalizedResource;
Since the HTTP verb is part of the string to sign, an AWS S3 presigned link is valid for exactly one HTTP verb. The one you have is for GET. Consult whoever crafted that link to furnish you with a presigned link made for HEAD if you wish to use --spider successfully.
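If a HEAD-signed link is not an option, one possible workaround is to keep using the GET verb but request only the first byte via a Range header. For presigned URLs the Range header is typically not part of the signature, though you should verify that for your setup:
# Existence check that still uses GET; -r 0-0 asks for just the first byte.
# 206 (or 200) means the object exists; 403/404 means it does not or is inaccessible.
curl -s -o /dev/null -w '%{http_code}\n' -r 0-0 "${URL}"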

AWS S3 PUT Example using REST API

The AWS S3 PUT REST API docs are lacking a clear example of the Authorization string in the Request Syntax.
Request Syntax
PUT /Key+ HTTP/1.1
Host: Bucket.s3.amazonaws.com
x-amz-acl: ACL
Cache-Control: CacheControl
Content-Disposition: ContentDisposition
Content-Encoding: ContentEncoding
Content-Language: ContentLanguage
Content-Length: ContentLength
Content-MD5: ContentMD5
Content-Type: ContentType
Expires: Expires
x-amz-grant-full-control: GrantFullControl
x-amz-grant-read: GrantRead
x-amz-grant-read-acp: GrantReadACP
x-amz-grant-write-acp: GrantWriteACP
x-amz-server-side-encryption: ServerSideEncryption
x-amz-storage-class: StorageClass
x-amz-website-redirect-location: WebsiteRedirectLocation
x-amz-server-side-encryption-customer-algorithm: SSECustomerAlgorithm
x-amz-server-side-encryption-customer-key: SSECustomerKey
x-amz-server-side-encryption-customer-key-MD5: SSECustomerKeyMD5
x-amz-server-side-encryption-aws-kms-key-id: SSEKMSKeyId
x-amz-server-side-encryption-context: SSEKMSEncryptionContext
x-amz-request-payer: RequestPayer
x-amz-tagging: Tagging
x-amz-object-lock-mode: ObjectLockMode
x-amz-object-lock-retain-until-date: ObjectLockRetainUntilDate
x-amz-object-lock-legal-hold: ObjectLockLegalHoldStatus
Body
The docs show this request example further on...
PUT /my-image.jpg HTTP/1.1
Host: myBucket.s3.<Region>.amazonaws.com
Date: Wed, 12 Oct 2009 17:50:00 GMT
Authorization: authorization string
Content-Type: text/plain
Content-Length: 11434
x-amz-meta-author: Janet
Expect: 100-continue
[11434 bytes of object data]
But again, the doc does not have an example format for the Authorization string. I tried AccessKeyID Secret but that didn't work. I don't even see logical parameters in the request syntax to pass the two parts of the credential (AccessKeyID and Secret) anywhere in the examples!
Does anyone have a simple example of how to use PUT to add a .json file to S3 using the REST API? Preferably a screenshot of a Postman setup to better explain where values go (in the URL vs. as headers).
From the AWS docs here, it appears you cannot simply pass your credentials along with a PUT request to an S3 bucket; you have to sign the request:
For authenticated requests, unless you are using the AWS SDKs, you have to write code to calculate signatures that provide authentication information in your requests.
This is a new concept to me. I've used token requests and sent keys in headers before when authenticating via REST APIs. It sounds like a more secure method of auth.
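That said, if you just want to test a raw REST PUT without writing the signing code yourself, recent versions of curl (7.75.0 and later) can compute the Signature Version 4 Authorization header for you. A sketch, with hypothetical bucket, region, and file names:
# curl builds the SigV4 Authorization header from the access key and secret.
curl --aws-sigv4 "aws:amz:us-east-1:s3" \
     --user "$AWS_ACCESS_KEY_ID:$AWS_SECRET_ACCESS_KEY" \
     -T ./data.json \
     -H "Content-Type: application/json" \
     "https://my-bucket.s3.us-east-1.amazonaws.com/data.json"
(-T uploads the file with the PUT method, so no explicit -X PUT is needed.)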

Render image from S3 in IE11/Edge with Content-Type: application/octet-stream

I have a set of images stored on S3 that should be displayed in the browser. These images have a content type of application/octet-stream and can be viewed in Chrome and Firefox.
My understanding is that Internet Explorer cannot view application/octet-stream content, or it is unable to realize that the S3 object is actually an image.
I've tried uploading new versions of my images to S3 and manually adding a Metadata Header/Value pair of Content-Type and image/png (using the S3 console, not the CLI). However, I still see the same application/octet-stream in IE.
Is it possible to configure my application (Angular4/SpringBoot/Tomcat) to tell IE to look at that type of content, or am I looking in the wrong place in S3?
It turned out the problem was in my backend: where I set up my AWS S3 API calls, I was setting the response headers to 'application/octet-stream', which overrode whatever was stored on the object in S3.
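If the stale metadata is on the S3 object itself rather than in a backend, a copy-in-place with the CLI is a reliable way to rewrite it; a sketch with hypothetical bucket and key names:
# Copy the object onto itself, replacing its metadata with a correct Content-Type.
aws s3api copy-object \
  --bucket my-bucket \
  --key images/picture.png \
  --copy-source my-bucket/images/picture.png \
  --content-type image/png \
  --metadata-directive REPLACE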

S3's SSE-C headers being ignored?

I am attempting to use S3's server-side encryption for customer keys. I created a bucket which allows anonymous users to upload objects, and then attempted to upload an object like so:
$ http -v PUT 'https://BUCKETNAME.s3.amazonaws.com/test.txt' \
"x-amz-server-side-encryption-customer-algorithm:AES256" \
"x-amz-server-side-encryption-customer-key:BASE64KEY" \
"x-amz-server-side-encryption-customer-key-MD5:EmqLRYqvItSQUzWCBAdF+A==" \
< ~/test.txt
PUT /test.txt HTTP/1.1
Accept: application/json
Accept-Encoding: gzip, deflate, compress
Content-Length: 20
Content-Type: application/json; charset=utf-8
Host: BUCKETNAME.s3.amazonaws.com
User-Agent: HTTPie/0.8.0
x-amz-server-side-encryption-customer-algorithm: AES256
x-amz-server-side-encryption-customer-key: BASE64KEY
x-amz-server-side-encryption-customer-key-MD5: EmqLRYqvItSQUzWCBAdF+A==
This is a test file
HTTP/1.1 200 OK
Content-Length: 0
Date: Wed, 21 Oct 2015 22:12:26 GMT
ETag: "5dd39cab1c53c2c77cd352983f9641e1"
Server: AmazonS3
x-amz-id-2: AUOQUfmHEwOPqqvDd5X7aTYk+SX043gVFvM3wlgbzfRcpQsXIxXOFjrTRAM+B2T9Ns6Z/C26lBg=
x-amz-request-id: 6063C14465E4B090
Everything seemed to be working, although the encryption headers didn't come back in the response. So, I attempted to fetch my new object:
$ curl 'https://BUCKETNAME.s3.amazonaws.com/test.txt'
This is a test file
Oh no! My encryption headers appear to have been completely ignored, and the object has been stored in plaintext. As far as I can tell from the documentation, I am uploading the object correctly. Any suggestions as to what I might be doing wrong?
What's really awful is that if I do a GET and include this key, I get back a 200. That's terrifying. I could easily have started using these calls and never noticed that no encryption was being performed.
I have discovered what I believe to be the cause. When I upload objects to S3 anonymously (but providing a key), the server-side encryption credentials are completely ignored, and my object is stored in plaintext. The credentials are also ignored when downloading, so the download works fine.
However, if I authenticate as any AWS user, the headers are respected, and my object is stored with appropriate encryption.
So, important note to SSE-C users: make sure you don't upload any objects anonymously, or the whole feature silently ignores your encryption keys entirely!
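For reference, here is what an authenticated SSE-C round trip looks like with the AWS CLI; the bucket name and key file are hypothetical:
# Upload with SSE-C; the CLI sends the x-amz-server-side-encryption-customer-* headers.
aws s3 cp ./test.txt s3://my-bucket/test.txt \
  --sse-c AES256 --sse-c-key fileb://sse-c.key
# Downloading now requires the same key; a plain anonymous GET of the object will fail.
aws s3 cp s3://my-bucket/test.txt ./test-roundtrip.txt \
  --sse-c AES256 --sse-c-key fileb://sse-c.key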

Amazon S3 and CloudFront - compressed files

According to Amazon's documentation, it is possible to serve compressed files via CloudFront/S3 if I upload both a compressed and an uncompressed version of the same file. Both files need to have the same content type, and the compressed one additionally needs Content-Encoding set to "gzip".
So now I have two files on S3:
https://s3-eu-west-1.amazonaws.com/kiga-client/gzip/client/config.js
https://s3-eu-west-1.amazonaws.com/kiga-client/gzip/client/config.js.gz
On my website I create a link to CloudFront which links to the config.js on
https://d1v5g5yve3hx29.cloudfront.net/gzip/client/config.js
I would now expect to automatically get the compressed file when the client sends Accept-Encoding: gzip via:
curl -I -H 'Accept-Encoding: gzip,deflate' https://d1v5g5yve3hx29.cloudfront.net/gzip/client/config.js
Unfortunately I get the raw file returned:
HTTP/1.1 200 OK
Content-Type: application/x-javascript
Content-Length: 3509
Connection: keep-alive
Date: Wed, 26 Nov 2014 11:12:43 GMT
Cache-Control: max-age=31536000
Last-Modified: Wed, 26 Nov 2014 10:50:15 GMT
ETag: "c310121403754f4faab782504912c15c"
Accept-Ranges: bytes
Server: AmazonS3
Age: 2405
X-Cache: Hit from cloudfront
Via: 1.1 8a256bddd45845f932a0a374e95fa057.cloudfront.net (CloudFront)
X-Amz-Cf-Id: 4HRqstvYGYD1A-vfvltNrXGffg0D5XbFjSpoWReI5UNYf-2jQfE8jQ==
The response header Content-Encoding: gzip should be set but is missing.
To serve compressed files you need to actually request the compressed file's URL from CloudFront. See pt. 5 here: http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/ServingCompressedFiles.html#CompressedS3
To be precise, one actually has to compress the files manually and then upload them to S3 with the appropriate metadata.
Furthermore, one must keep the original filename, even though the file is compressed.
So given a file image.jpg which gets compressed to image.jpg.gz, one has to upload image.jpg.gz and rename it to image.jpg.
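Putting it together, the workflow looks roughly like this (bucket and paths hypothetical, content type matching the uncompressed file):
# Compress locally; -c keeps the original file and writes the gzipped copy separately.
gzip -9 -c config.js > config.js.gz
# Upload the gzipped bytes under the original name, with the metadata CloudFront needs.
aws s3 cp config.js.gz s3://my-bucket/gzip/client/config.js \
  --content-type application/x-javascript \
  --content-encoding gzip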