We have a hosted website that uses an S3 bucket for storing different files (say, a policy repo). We want to reorganize the files into proper folders under the S3 bucket without breaking the links to these objects on our website. Is there a way to achieve this in S3?
You could put a CloudFront distribution in front of your S3 bucket and use Lambda@Edge to rewrite the URLs, mapping them to the new folder paths without breaking the old URLs.
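For illustration, a viewer-request function along these lines could do the rewrite. This is only a minimal sketch on the Python Lambda@Edge runtime; the path mapping and names are hypothetical placeholders, not taken from your site:

    # Hypothetical mapping from old, flat object keys to their new folder paths.
    PATH_MAP = {
        '/policy.pdf': '/policies/policy.pdf',
        '/terms.pdf': '/legal/terms.pdf',
    }

    def lambda_handler(event, context):
        # CloudFront passes the incoming request inside the event record.
        request = event['Records'][0]['cf']['request']

        # If the requested path was moved, point the request at the new key
        # before CloudFront forwards it to the S3 origin.
        if request['uri'] in PATH_MAP:
            request['uri'] = PATH_MAP[request['uri']]

        return request

The old URLs keep working because the rewrite happens inside CloudFront, before the request ever reaches S3.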
From Configuring a Webpage Redirect - Amazon Simple Storage Service:
If your Amazon S3 bucket is configured for website hosting, you can redirect requests for an object to another object in the same bucket or to an external URL.
You set the redirect by adding the x-amz-website-redirect-location property to the object metadata. The website then interprets the object as a 301 redirect. To redirect a request to another object, you set the redirect location to the key of the target object.
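As a rough sketch of what that looks like with boto3 (the bucket and key names here are hypothetical): after moving an object to its new folder, you can leave a zero-byte object at the old key whose redirect metadata points at the new key.

    import boto3

    s3 = boto3.client('s3')

    # Leave a placeholder at the old key that 301-redirects to the new location
    # when the bucket is accessed through its S3 website endpoint.
    s3.put_object(
        Bucket='my-policy-bucket',                       # hypothetical bucket name
        Key='policy.pdf',                                # old, pre-move key
        WebsiteRedirectLocation='/policies/policy.pdf',  # new key; must start with "/" or "http(s)://"
    )

Note that the redirect only takes effect for requests made through the bucket's website endpoint, not the REST endpoint.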
I have created an S3 bucket and done the steps to enable static web hosting on it.
I have verified it works by going to the URL, which looks something like the following: https://my-bucket.s3.aws.com
I want to put my web assets in a subfolder now.
I put the web assets in a folder I called foobar
Now if I want to access it, I have to explicitly enter the URL as follows:
https://my-bucket.s3.aws.com/foobar/index.html
So my question is: do I need to use some other service such as CloudFront so that I can access the bucket with the following URL instead: https://my-bucket.s3.aws.com/foobar? That is, I don't want to have to explicitly add index.html at the end.
You can't do this with a default document for a subfolder using CloudFront. The documentation says:
However, if you define a default root object, an end-user request for a subdirectory of your distribution does not return the default root object. For example, suppose index.html is your default root object and that CloudFront receives an end-user request for the install directory under your CloudFront distribution:
http://d111111abcdef8.cloudfront.net/install/
CloudFront does not return the default root object even if a copy of index.html appears in the install directory.
But that same page also says
The behavior of CloudFront default root objects is different from the behavior of Amazon S3 index documents. When you configure an Amazon S3 bucket as a website and specify the index document, Amazon S3 returns the index document even if a user requests a subdirectory in the bucket. (A copy of the index document must appear in every subdirectory.) For more information about configuring Amazon S3 buckets as websites and about index documents, see the Hosting Websites on Amazon S3 chapter in the Amazon Simple Storage Service Developer Guide.
So check out that referenced guide, and in particular the section on Configuring an Index Document.
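If it helps, the index-document setting itself can be applied with boto3 like this (a minimal sketch; the bucket name matches the question, the error document is an assumption):

    import boto3

    s3 = boto3.client('s3')

    # Tell the S3 website endpoint to serve <prefix>/index.html for requests
    # that end in "/" (e.g. /foobar/ -> /foobar/index.html).
    s3.put_bucket_website(
        Bucket='my-bucket',
        WebsiteConfiguration={
            'IndexDocument': {'Suffix': 'index.html'},
            'ErrorDocument': {'Key': 'error.html'},  # optional
        },
    )

Keep in mind this behaviour applies to the bucket's website endpoint (my-bucket.s3-website-<region>.amazonaws.com), not the REST-style URL shown in the question.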
Is it possible to protect data embedded on my website via Amazon CloudFront from hotlinking or other downloads? I am mainly interested in protecting webfonts from being downloaded.
Amazon CloudFront is connected to an S3 bucket
S3 Bucket Policy controls allowed domains for files via CloudFront
Do you think that could work?
Since you have CloudFront set up in front of your S3 bucket, you can use CloudFront signed URLs to prevent downloads by the general public.
You can put your fonts in a folder called fonts, for example, set up a separate behavior in CloudFront for any path that contains /fonts/, and there activate Restrict Viewer Access.
On your website, you will need to add some way to generate the signed URL only when your web page is loaded, and you can give this URL a short expiry time.
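A minimal sketch of generating such a signed URL server-side with botocore and the cryptography package (the key-pair ID, key path, and font URL are hypothetical placeholders):

    from datetime import datetime, timedelta

    from botocore.signers import CloudFrontSigner
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    def rsa_signer(message):
        # Sign the policy with the private key matching the CloudFront public key.
        with open('cloudfront_private_key.pem', 'rb') as key_file:
            private_key = serialization.load_pem_private_key(key_file.read(), password=None)
        return private_key.sign(message, padding.PKCS1v15(), hashes.SHA1())

    signer = CloudFrontSigner('K2JCJMDEHXQW5F', rsa_signer)  # hypothetical key-pair ID

    # Signed URL for a font behind the /fonts/* behaviour, valid for five minutes.
    signed_url = signer.generate_presigned_url(
        'https://d111111abcdef8.cloudfront.net/fonts/myfont.woff2',
        date_less_than=datetime.utcnow() + timedelta(minutes=5),
    )
    print(signed_url)

CloudFront expects the policy to be signed with SHA-1 RSA using the private key that matches the public key registered with the distribution.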
I've got a website with custom CMS hosted outside of AWS.
How can I set up CloudFront?
My idea is to upload the current content to an S3 bucket and set up CloudFront with that S3 bucket as the origin.
This would also require some changes in the CMS itself:
Replace static content (image) URLs with the ones pointing to the S3 bucket.
Check if a file exists on S3; if so, use S3, otherwise use the copy on the local server (see the sketch after this question).
Interact with the S3 bucket API (uploading/deleting):
When uploading a new file, upload it to the S3 bucket too
When deleting a file, delete it from S3 too
Delete only from the bucket.
Is there any other easier way? Maybe I'm just overthinking this and can't see the simplest solution.
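For what it's worth, the "use S3 if the file exists there, otherwise the local copy" check mentioned in the list might look roughly like this with boto3 (bucket name, distribution domain, and paths are hypothetical):

    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client('s3')

    def asset_url(key):
        """Return the CDN URL if the object exists on S3, else the local path."""
        try:
            s3.head_object(Bucket='my-cms-assets', Key=key)
            return f'https://d111111abcdef8.cloudfront.net/{key}'
        except ClientError as err:
            if err.response['Error']['Code'] == '404':
                return f'/local-assets/{key}'
            raise  # propagate anything other than "not found"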
aws s3 sync does not seem to copy the website redirect metadata by default.
There is this option:
--website-redirect (string) If the bucket is configured as a website, redirects requests for this object to another object in the same bucket or to an external URL. Amazon S3 stores the value of this header in the object metadata.
But I'm looking for some kind of directive to get sync to copy the redirect of each file to the sync target. Is there any way to do that?
aws s3 cp has the same option. I'm not sure how sync would handle this, since sync operates on a whole directory; cp only handles a single file, unless you use sync with specific files rather than the whole directory.
It looks like the redirect is just metadata attached to the object, and that is what --website-redirect sets.
The following Amazon S3 API actions support the x-amz-website-redirect-location header in the request. Amazon S3 stores the header value in the object metadata as x-amz-website-redirect-location.
https://docs.aws.amazon.com/AmazonS3/latest/dev/how-to-page-redirect.html
x-amz-website-redirect-location
If the bucket is configured as a website, redirects requests for this object to another object in the same bucket or to an external URL. Amazon S3 stores the value of this header in the object metadata. For information about object metadata, see Object Key and Metadata.
In the following example, the request header sets the redirect to an object (anotherPage.html) in the same bucket:
x-amz-website-redirect-location: /anotherPage.html
In the following example, the request header sets the object redirect to another website:
x-amz-website-redirect-location: http://www.example.com/
For more information about website hosting in Amazon S3, see Hosting Websites on Amazon S3 and How to Configure Website Page Redirects in the Amazon Simple Storage Service Developer Guide.
Type: String
Default: None
Constraints: The value must be prefixed by "/", "http://" or "https://". The length of the value is limited to 2 KB.
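Since sync itself doesn't carry the redirect over, one workaround is a small script that re-applies the header after the sync. A minimal sketch with boto3, assuming hypothetical source and destination bucket names:

    import boto3

    s3 = boto3.client('s3')
    SRC, DST = 'old-website-bucket', 'new-website-bucket'  # hypothetical names

    paginator = s3.get_paginator('list_objects_v2')
    for page in paginator.paginate(Bucket=SRC):
        for obj in page.get('Contents', []):
            key = obj['Key']
            head = s3.head_object(Bucket=SRC, Key=key)
            redirect = head.get('WebsiteRedirectLocation')
            if not redirect:
                continue  # no redirect metadata on this object, nothing to carry over
            # Copy the synced object onto itself in the destination bucket,
            # replacing metadata so the redirect header is set there as well.
            s3.copy_object(
                Bucket=DST,
                Key=key,
                CopySource={'Bucket': DST, 'Key': key},
                MetadataDirective='REPLACE',
                Metadata=head.get('Metadata', {}),
                ContentType=head.get('ContentType', 'binary/octet-stream'),
                WebsiteRedirectLocation=redirect,
            )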
There is a bucket with some world-readable content, which is referenced from many places. We have migrated the contents of the bucket to a new bucket.
Now we need to remove the old bucket, but we cannot remove the endpoints/references to the objects that were generated in the old bucket.
For example:
Old bucket name: xxx-yyy
Sample endpoint : https://s3.amazonaws.com/xxx-yyy/facebook.png
New bucket name: abc-pqr
Sample endpoint : https://s3.amazonaws.com/abc-pqr/facebook.png
Any request coming to the non-existent xxx-yyy bucket should redirect to the abc-pqr bucket. We do not want to remove the endpoints; we just want requests for objects at the old endpoint to redirect to the new bucket.
It appears that you are referencing files directly in Amazon S3. This format of URL is not able to redirect requests.
Amazon S3 buckets have a capability called static website hosting, which provides additional capabilities such as default index and error pages, plus the ability to set up a webpage redirect.
However, this requires a different URL to access your objects (eg http://xxx-yyy.s3-website-us-west-2.amazonaws.com/facebook.png). Given that you are unable to change your existing links, this would not be an option.
Your only option would be to create web pages in the original S3 bucket that use an HTML redirect to forward browsers to the new location.
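A minimal sketch of that approach with boto3, using the bucket names from the question: overwrite each object in the old bucket with a small HTML page that forwards the browser to the same key in the new bucket.

    import boto3

    s3 = boto3.client('s3')
    OLD_BUCKET, NEW_BUCKET = 'xxx-yyy', 'abc-pqr'

    # Simple meta-refresh page pointing at the object's new location.
    # Keys are assumed to be URL-safe for this sketch.
    REDIRECT_HTML = (
        '<!DOCTYPE html><html><head>'
        '<meta http-equiv="refresh" content="0; url=https://s3.amazonaws.com/{bucket}/{key}">'
        '</head><body></body></html>'
    )

    paginator = s3.get_paginator('list_objects_v2')
    for page in paginator.paginate(Bucket=OLD_BUCKET):
        for obj in page.get('Contents', []):
            key = obj['Key']
            s3.put_object(
                Bucket=OLD_BUCKET,
                Key=key,
                Body=REDIRECT_HTML.format(bucket=NEW_BUCKET, key=key).encode(),
                ContentType='text/html',
            )

Note this only helps when a browser navigates to the old URL directly; resources embedded via img tags or similar won't follow an HTML redirect.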
With your current setup that's not possible. If you had used AWS CloudFront, you could have easily achieved that.