Amazon S3 image hosting with Shopify

I have an AWS S3 bucket where I store product images. I sell on multiple sales channels and use ChannelAdvisor to share all my product data with the different sites. My image URLs are sent via ChannelAdvisor to the sites. Amazon reads my images fine, but my Shopify website does not load the images at all.
I think it's because of how the images are served: if you put an image URL in your browser, it downloads the image rather than displaying it, and I think this is my problem with Shopify.
Below is my current AWS policy. My question is: how do I change the policy, or the shared URLs, so that the image displays in the browser instead of downloading?
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "AllowPublicRead",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::mybucket-name/*"
    }
  ]
}

This is not a function of policy, but rather one of metadata. Browsers use the Content-Type response header to determine what kind of file is coming in, and how to handle it. For example, for a .png file, the content type needs to be set to image/png. You set this when uploading the files to S3.
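As a minimal sketch with boto3 (the local file name and object key here are hypothetical; the bucket name is taken from the policy above), the header is set via ExtraArgs at upload time. Objects that were already uploaded can be fixed in place with copy_object and MetadataDirective='REPLACE'.

import boto3

s3 = boto3.client('s3')

# Upload with an explicit Content-Type so browsers render the image
# inline instead of downloading it.
s3.upload_file(
    'product.png',                 # local file (hypothetical)
    'mybucket-name',               # bucket from the policy above
    'images/product.png',          # object key (hypothetical)
    ExtraArgs={'ContentType': 'image/png'},
)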

Related

Restrict read-write access from my S3 bucket

I am hosting a website where users can write and read files, which are stored in another S3 bucket. However, I want to restrict access to these files to my website only.
For example, loading a picture.
If the request comes from my website (example.com), I want the read (or write, if I upload a picture) request to be allowed by the S3 storage bucket.
If the request comes from a user who types the Object URL directly into their browser, I want the storage bucket to block it.
Right now, despite everything I have tried, people can access resources from the Object URL.
Here is my Bucket Policy:
{
  "Version": "2012-10-17",
  "Id": "Id",
  "Statement": [
    {
      "Sid": "Sid",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:GetObjectAcl"
      ],
      "Resource": "arn:aws:s3:::storage-bucket/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": "http://example.com/*"
        }
      }
    }
  ]
}
Additional information:
All my "Block public access" settings are unchecked. (I think the problem comes from here: when I check the two boxes about ACLs, my main problem is fixed, but then I get a 403 Forbidden error when uploading files to the bucket, which is another problem.)
My website is statically hosted on another S3 bucket.
If you need more information or details, ask me.
Thank you in advance for your answers.
This message was written by a French speaker; sorry for any mistakes.
"aws:Referer": "http://example.com/*"
The Referer is an HTTP header passed by the browser, and any client can freely set its value. It provides no real security.
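For instance, a sketch with the Python requests library (the bucket URL here is hypothetical) shows how trivially the header is forged:

import requests

# Any client can simply claim the request comes from example.com;
# S3 cannot verify the header, so a Referer-based "restriction" is bypassed.
resp = requests.get(
    'https://storage-bucket.s3.amazonaws.com/picture.png',
    headers={'Referer': 'http://example.com/'},
)
print(resp.status_code)  # likely 200: the forged header satisfies the policy condition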
However, I want to restrict the access of these files only to my website
The standard way to restrict access to S3 resources for a website is to use pre-signed URLs. Basically, your website backend creates an S3 URL for downloading or uploading an S3 object and passes that URL only to authenticated/allowed clients (a sketch follows below). The resource bucket can then block public access entirely. Allowing uploads without authentication is usually a very bad idea.
Yes, in this case your website is not static anymore, and you need some backend logic to do this.
If your website clients are authenticated, you can use AWS API Gateway and Lambda to create these pre-signed URLs for the clients.
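A minimal sketch of that backend logic with boto3, assuming the bucket name from the question and a hypothetical object key; the URLs work for a limited time while the bucket itself stays private:

import boto3

s3 = boto3.client('s3')

# URL that lets a client read a private object for 10 minutes.
download_url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'storage-bucket', 'Key': 'picture.png'},
    ExpiresIn=600,
)

# Same idea for an authenticated upload.
upload_url = s3.generate_presigned_url(
    'put_object',
    Params={'Bucket': 'storage-bucket', 'Key': 'uploads/picture.png'},
    ExpiresIn=600,
)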

The website hosted on EC2 not able to access S3 image link

I have assigned a role of Fulls3Access to the EC2 instance. The website on EC2 can upload and delete S3 objects, but access to the S3 asset URLs is denied (meaning I can't read the images). I have enabled the block public access settings. I want some folders to be confidential so that only the website can access them. I have tried setting conditions for public read, such as sourceIp and referer URL, in the bucket policy, but the policy below doesn't work; the images on the website still don't display. Does anyone have ideas on how to enable read access to the S3 bucket while also restricting it to the website only?
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicRead",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion"
      ],
      "Resource": [
        "arn:aws:s3:::bucketname/assets/*"
      ]
    },
    {
      "Sid": "AllowIP",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion"
      ],
      "Resource": [
        "arn:aws:s3:::bucketname/private/*"
      ],
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "ip1/32",
            "ip2/32"
          ]
        }
      }
    }
  ]
}
If you're trying to serve these assets to users' browsers via an application on the EC2 host, then the source of the request would not be the EC2 server; it would be the user's browser.
If you want to restrict assets, there are a few options that still allow the user to see them in the browser.
The first option is to generate a presigned URL using the AWS SDK. This creates an ephemeral link that expires after a certain length of time. A URL has to be generated each time the asset is needed, which works well for sensitive information that is not accessed frequently.
The second option is to add a CloudFront distribution in front of the S3 bucket and use signed cookies. Your code generates a cookie, which is then included in all requests to the CloudFront distribution. This allows the same behaviour as a signed URL but only needs to be generated once for a user to access all content. (A sketch of the signing step follows below.)
If all assets should only be accessed from your website but are not considered sensitive, you could also add a WAF to a CloudFront distribution in front of your S3 bucket, configured with a rule that only allows requests where the Referer header matches your domain. This can still be bypassed by someone setting that header in the request, but it cuts down on crawlers hitting your assets.
More information is available in the How to Prevent Hotlinking by Using AWS WAF, Amazon CloudFront, and Referer Checking documentation.
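As a sketch of the CloudFront signing step: boto3 ships a CloudFrontSigner that produces signed URLs (signed cookies use the same signature, just delivered as Set-Cookie values). The key pair ID, key file, and distribution domain below are hypothetical placeholders:

import datetime
from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def rsa_signer(message):
    # Sign with the private key registered for the CloudFront key pair.
    with open('cloudfront_private_key.pem', 'rb') as f:
        key = serialization.load_pem_private_key(f.read(), password=None)
    return key.sign(message, padding.PKCS1v15(), hashes.SHA1())

signer = CloudFrontSigner('APKAEXAMPLEKEYID', rsa_signer)  # hypothetical key pair ID
url = signer.generate_presigned_url(
    'https://d111111abcdef8.cloudfront.net/assets/image.png',  # hypothetical distribution
    date_less_than=datetime.datetime.utcnow() + datetime.timedelta(hours=1),
)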

Amazon S3 domain level privacy

I have to use Amazon S3 to store interactive video files created by Camtasia. Then I have to display these videos on a website in an iframe, on a specific domain, without making the files public. Vimeo has a feature called domain-level privacy, which lets you choose which websites are allowed to embed your video.
How can I achieve this with Amazon S3?
I've already reached the point where I can control access to a file with the bucket policy this way:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::mybucket/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": "http://myurl.hu/*"
        }
      }
    }
  ]
}
It would be fine if I called a single file in the iframe, but I have to load a .html file that links to other .js files in the same bucket directory, and then I get 403 errors.
Problem solved.
The iframe changed the Referer header; that's why Amazon blocked the resources loaded inside it.

Protecting S3 assets from direct download

I have my assets (images/videos etc) stored in S3 and everything is working great.
The videos, however, need to be safe from download by the user. I have thought about numerous approaches using Ajax, blobs, hiding context menus, etc., but would prefer a simpler yet stronger technique.
The idea I've come up with is to add protection to the S3 bucket so that the assets can only be accessed from the website itself (via an IAM role that the EC2 instance has access to).
I'm just unsure how this works. The bucket is set to static website hosting, so everything in it is public; I'm guessing I need to change that and then add some direct permissions. Has anyone done this, or can anyone say whether it is possible?
Thanks
You can serve video content through Amazon CloudFront, which can deliver it using video streaming protocols rather than as file downloads. This can keep your content (mostly) safe.
See: On-Demand and Live Streaming Video with CloudFront - Amazon CloudFront
You would then keep the videos private in S3 but use an Origin Access Identity that permits CloudFront to access the content and serve it to users.
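A minimal sketch of attaching such a policy with boto3, assuming a hypothetical bucket name and OAI ID; only CloudFront (via the OAI) can then read the objects:

import json
import boto3

s3 = boto3.client('s3')

# Hypothetical bucket name and Origin Access Identity ID.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowCloudFrontOAIReadOnly",
        "Effect": "Allow",
        "Principal": {
            "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E2EXAMPLE1OAI"
        },
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::my-video-bucket/*"
    }]
}

s3.put_bucket_policy(Bucket='my-video-bucket', Policy=json.dumps(policy))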
In addition to the already mentioned CloudFront, you could also use AWS Elastic Transcoder (https://aws.amazon.com/elastictranscoder/) to convert video files to MPEG-DASH or HLS format (https://en.wikipedia.org/wiki/Dynamic_Adaptive_Streaming_over_HTTP). These formats basically consist of short video segments (for example, 10 seconds long), which allows adaptive bitrate and makes it much harder to download everything as one long video.
For CloudFront to work with S3 static website endpoints, AWS generally recommends public read permissions on the S3 bucket. There is no native way of securing the path between CloudFront and an S3 static website endpoint; however, in this case we can use a workaround to satisfy your use case.
By default, all S3 resources are private, so only the AWS account that created them can access them. To allow read access to these objects from your website, you can add a bucket policy that allows the s3:GetObject permission with a condition, using the aws:Referer key, that the GET request must originate from specific web pages. The following policy specifies the StringLike condition with the aws:Referer condition key.
{
  "Version": "2012-10-17",
  "Id": "http referer policy example",
  "Statement": [
    {
      "Sid": "Allow get requests referred by www.example.com and example.com.",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::examplebucket/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": [
            "http://www.example.com/*",
            "http://example.com/*"
          ]
        }
      }
    },
    {
      "Sid": "Explicit deny to ensure requests are allowed only from specific referer.",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::examplebucket/*",
      "Condition": {
        "StringNotLike": {
          "aws:Referer": [
            "http://www.example.com/*",
            "http://example.com/*"
          ]
        }
      }
    }
  ]
}

Amazon S3 Permissions by Client code

I've been converting an existing application to an EC2, S3 and RDS model within AWS. So far it's going well, but I've hit a problem I can't seem to find any info on.
My web application accesses the S3 bucket for images and documents, which are stored by client code:
Data/ClientCode1/Images
Data/ClientCode2/Images
Data/ClientABC/Images -- etc
The EC2 instance hosting the web application works within a similar structure, e.g. www.programname.com/ClientCode1/Index.aspx; this has working security to prevent cross-client access.
Now, when www.programname.com/ClientCode1/Index.aspx goes to fetch images from S3, I need to make sure it can only access the ClientCode1 folder on S3. The goal is to prevent client A from seeing the images/documents of client B if a tech-savvy sort tried.
Is there perhaps a way to use the page referrer, or is there a better approach to this issue?
There is no way to use the URL or referrer to control access to Amazon S3, because that information is presented to your application (not to S3).
If all your users access the data in Amazon S3 via the same application, it will be the job of your application to enforce any desired security, because the application uses a single set of credentials to access AWS services, and those credentials need access to all the data the application might request.
To clarify: Amazon S3 has no idea which page a user is viewing; only your application knows this. Therefore your application needs to enforce the security (a sketch of one way to do this follows below).
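A minimal sketch of that application-side enforcement with boto3, assuming a hypothetical helper that receives the authenticated client code from the session; the application only ever signs keys under that client's own prefix:

import boto3

s3 = boto3.client('s3')
BUCKET = 'clientdata'  # hypothetical bucket name

def get_image_url(client_code, image_name):
    # The application, not S3, decides whose folder may be read:
    # the key is always built from the authenticated client's own code,
    # so ClientCode1 can never request ClientCode2's objects.
    key = f'Data/{client_code}/Images/{image_name}'
    return s3.generate_presigned_url(
        'get_object',
        Params={'Bucket': BUCKET, 'Key': key},
        ExpiresIn=300,
    )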
I found the solution; it seems to work well:
{
  "Version": "2012-10-17",
  "Id": "http referer policy example",
  "Statement": [
    {
      "Sid": "Allow get requests referred by www.example.com and example.com.",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::clientdata/Clients/Client1/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": [
            "http://www.example.com/Client1/*",
            "http://example.com/Client1/*"
          ]
        }
      }
    },
    {
      "Sid": "Explicit deny to ensure requests are allowed only from specific referer.",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::clientdata/Clients/Client1/*",
      "Condition": {
        "StringNotLike": {
          "aws:Referer": [
            "http://www.example.com/Client1/*",
            "http://example.com/Client1/*"
          ]
        }
      }
    }
  ]
}
This allows you to check the referer to see whether the request comes from a given path. In my case each client sits in their own path, and the bucket follows the same rule: in the example above, only a user coming from Client1 can access the bucket data for Client1. If I log in as Client2 and try to force an image from the Client1 path, I get access denied.