aws s3 sync include copying website redirect - amazon-web-services

aws s3 sync does not seem to copy the website redirect metadata by default.
There is this option:
--website-redirect (string) If the bucket is configured as a website, redirects requests for this object to another object in the same bucket or to an external URL. Amazon S3 stores the value of this header in the object metadata.
But I'm looking for some kind of directive to get sync to copy the redirect of each file to the sync target. Is there any way to do that?

aws s3 cp has the same option, but it applies a single redirect value per invocation, so it only works for one file at a time. That doesn't map cleanly onto sync, which copies a whole directory tree in which each object may carry a different redirect.
The redirect itself is just metadata stored on the object, and that metadata is what --website-redirect sets.
The following Amazon S3 API actions support the x-amz-website-redirect-location header in the request. Amazon S3 stores the header value in the object metadata as x-amz-website-redirect-location.
https://docs.aws.amazon.com/AmazonS3/latest/dev/how-to-page-redirect.html
x-amz-website-redirect-location
If the bucket is configured as a website, redirects requests for this object to another object in the same bucket or to an external URL. Amazon S3 stores the value of this header in the object metadata. For information about object metadata, see Object Key and Metadata.
In the following example, the request header sets the redirect to an object (anotherPage.html) in the same bucket:
x-amz-website-redirect-location: /anotherPage.html
In the following example, the request header sets the object redirect to another website:
x-amz-website-redirect-location: http://www.example.com/
For more information about website hosting in Amazon S3, see Hosting Websites on Amazon S3 and How to Configure Website Page Redirects in the Amazon Simple Storage Service Developer Guide.
Type: String
Default: None
Constraints: The value must be prefixed by, "/", "http://" or "https://". The length of the value is limited to 2 K.
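Since the redirect is plain object metadata, one possible workaround (a sketch, not a built-in sync feature; the bucket names are placeholders) is to sync the data first and then re-read each source object's redirect and re-apply it on the target:

```shell
#!/bin/sh
# Sketch: carry website-redirect metadata across to a sync target.
# SRC_BUCKET and DST_BUCKET are placeholder names.
SRC_BUCKET=source-bucket
DST_BUCKET=target-bucket

# 1. Copy the objects themselves (sync does not carry the redirect header).
aws s3 sync "s3://$SRC_BUCKET" "s3://$DST_BUCKET"

# 2. For each object, read x-amz-website-redirect-location and re-apply it.
aws s3api list-objects-v2 --bucket "$SRC_BUCKET" \
    --query 'Contents[].Key' --output text | tr '\t' '\n' | \
while read -r key; do
    redirect=$(aws s3api head-object --bucket "$SRC_BUCKET" --key "$key" \
        --query 'WebsiteRedirectLocation' --output text)
    if [ "$redirect" != "None" ]; then
        # Copying the object again with --website-redirect sets the header
        # on the target copy.
        aws s3 cp "s3://$SRC_BUCKET/$key" "s3://$DST_BUCKET/$key" \
            --website-redirect "$redirect"
    fi
done
```

This makes one head-object call per key, so it is slow on large buckets, but it preserves per-object redirects that a plain sync drops.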


AWS S3 AccessDenied on subfolder

I have created an S3 bucket and done the steps to enable static website hosting on it.
I have verified it works by going to the URL,
which looks something like https://my-bucket.s3.aws.com
I now want to put my web assets in a subfolder,
so I put them in a folder I called foobar.
To access it, I currently have to enter the URL explicitly as:
https://my-bucket.s3.aws.com/foobar/index.html
So my question is: do I need some other service such as CloudFront so that I can reach the folder with the URL https://my-bucket.s3.aws.com/foobar instead, i.e. without having to spell out index.html at the end?
You can't do this with a default document for a subfolder using CloudFront. Documentation says
However, if you define a default root object, an end-user request for
a subdirectory of your distribution does not return the default root
object. For example, suppose index.html is your default root object
and that CloudFront receives an end-user request for the install
directory under your CloudFront distribution:
http://d111111abcdef8.cloudfront.net/install/
CloudFront does not return the default root object even if a copy of
index.html appears in the install directory.
But that same page also says
The behavior of CloudFront default root objects is different from the
behavior of Amazon S3 index documents. When you configure an Amazon S3
bucket as a website and specify the index document, Amazon S3 returns
the index document even if a user requests a subdirectory in the
bucket. (A copy of the index document must appear in every
subdirectory.) For more information about configuring Amazon S3
buckets as websites and about index documents, see the Hosting
Websites on Amazon S3 chapter in the Amazon Simple Storage Service
Developer Guide.
So check out that referenced guide, and the section on Configuring an Index Document in particular.
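If the site is served through the bucket's website endpoint, the index document can be set from the CLI, for example (the bucket name is a placeholder):

```shell
# Enable static website hosting with index.html as the index document.
# The bucket name is a placeholder. Once set, a request to the website
# endpoint for /foobar/ returns /foobar/index.html automatically.
aws s3 website s3://my-bucket/ --index-document index.html --error-document error.html
```

Note that index documents only apply on the website endpoint (http://my-bucket.s3-website-REGION.amazonaws.com), not on the REST-style URLs shown in the question.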

AWS S3 signed URL meta returned in browser

I have a blogging website and it uses AWS S3 signed URL logic to upload any pictures, used in the blogs, directly from the browser to S3 bucket.
To maintain security, the request for generating the signed URL goes through the backend, which verifies the user's authentication and other things, then returns a URL along with a few config values that must be used to upload the file to the S3 bucket from the client application. Among those values the server returns some metadata. For consistency I used the user's email address as metadata, to ensure that no random user can upload a file to S3 (the upload would be secure without this too, but I added it as an extra layer of security).
The problem I recently found (maybe I missed some config) is that when a file uploaded by a particular user abc@example.com is fetched, the response headers include this field:
x-amz-meta-data: {"emailaddress":"abc@example.com"}
Did I miss some configuration in the S3 bucket? Or is the metadata returned in all responses?
If it is always returned, how is this a "signed" URL when all the metadata is visible in the browser? If not, what configuration am I missing?
If this is expected behavior, how can I transfer all the files to a new bucket with the same policy but with modified metadata?
Any help would be appreciated.
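For context: user-defined metadata (the x-amz-meta-* headers) is always returned in the response headers of a GET or HEAD on the object, so this behavior is expected; a signed URL controls who may perform the request, not which headers come back. To rewrite the metadata on existing objects, each object can be copied (in place or to a new bucket) with a replaced metadata set. A sketch, with placeholder bucket names and metadata values:

```shell
#!/bin/sh
# Sketch: copy objects to a new bucket while replacing user metadata.
# Bucket names and the replacement metadata value are placeholders.
SRC_BUCKET=old-blog-assets
DST_BUCKET=new-blog-assets

aws s3api list-objects-v2 --bucket "$SRC_BUCKET" \
    --query 'Contents[].Key' --output text | tr '\t' '\n' | \
while read -r key; do
    # --metadata-directive REPLACE discards the old user metadata
    # and applies only the new set given by --metadata.
    aws s3 cp "s3://$SRC_BUCKET/$key" "s3://$DST_BUCKET/$key" \
        --metadata '{"uploader":"redacted"}' --metadata-directive REPLACE
done
```

Without --metadata-directive REPLACE, a copy carries the source object's metadata along unchanged, which is exactly what you are trying to avoid here.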

S3 upload and access - How can the server monitor the direct upload of files from client to S3?

I am new to AWS.
I am learning to upload a file from a client directly to S3. Here are the steps:
Client-1 sends a request for a file upload.
The server generates a pre-signed URL, keys, etc. and sends them to client-1.
Client-1 uploads the files directly to S3 using the keys it received from the server.
Now, how can the server (application) know that the upload was successful?
How can the server access the S3 content (just like it accesses database content, e.g. how many files there are)?
If client-2 wants to access a file uploaded by client-1, how can the server programmatically give client-2 access? (Like expiring tokens or signed URLs, similar to how Facebook uses a long URL with access keys for images.)
Thank you for answering!
Amazon S3 can be configured to trigger an AWS Lambda function when a new object is created. It will provide the name of the bucket and object. However, there won't be a clear correlation between the client and the upload unless the pre-signed URL enforces a particular filename.
The server can then access the object in Amazon S3 via normal API calls, just like any other object stored in S3.
If client-2 wants to access the object, and the object is private, the server can generate a pre-signed URL to give client-2 access to the object.
See: Amazon S3 pre-signed URLs
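Both pieces of the answer can be wired up from the CLI. A sketch (the bucket name and Lambda function ARN are placeholders):

```shell
# 1. Notify a Lambda function whenever an object is created in the bucket,
#    so server-side code learns that an upload completed.
#    The bucket name and function ARN are placeholders; the function must
#    already grant S3 permission to invoke it.
aws s3api put-bucket-notification-configuration --bucket my-upload-bucket \
    --notification-configuration '{
        "LambdaFunctionConfigurations": [{
            "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:on-upload",
            "Events": ["s3:ObjectCreated:*"]
        }]
    }'

# 2. Give client-2 temporary read access to a private object with a
#    pre-signed GET URL that expires after an hour.
aws s3 presign s3://my-upload-bucket/uploads/photo.jpg --expires-in 3600
```

The presign command prints a URL; anyone holding it can GET the object until it expires, which matches the expiring-token pattern asked about.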

Moving files without breaking links in AWS s3

We have a website that uses an S3 bucket for storing various files (say, a policy repo). We want to reorganize the files into proper folders under the S3 bucket without breaking the links to these objects on our website. Is there a way to achieve this in S3?
You could put a CloudFront distribution in front of your S3 bucket and use Lambda@Edge to rewrite the URLs, mapping them to the new folder paths without breaking the old URLs.
From Configuring a Webpage Redirect - Amazon Simple Storage Service:
If your Amazon S3 bucket is configured for website hosting, you can redirect requests for an object to another object in the same bucket or to an external URL.
You set the redirect by adding the x-amz-website-redirect-location property to the object metadata. The website then interprets the object as a 301 redirect. To redirect a request to another object, you set the redirect location to the key of the target object.
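For example, the header can be set from the CLI by writing a placeholder object at the old key (the bucket and key names below are placeholders):

```shell
# Create an empty object at the old key whose only job is to redirect.
# Requests for /old/policy.pdf served through the bucket's website
# endpoint receive a 301 to /new/policy.pdf. Names are placeholders.
aws s3api put-object --bucket my-bucket --key old/policy.pdf \
    --website-redirect "/new/policy.pdf"
```

As with index documents, the redirect is honored only on the website endpoint, not on REST-style s3.amazonaws.com URLs.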

Redirecting request for non-existent s3 bucket to different bucket

There is a bucket with some world-readable content, which is referenced from many places. We have migrated the contents of the bucket to a new bucket.
Now we need to remove the old bucket, but we cannot change the endpoints/references to the objects that were generated against the old bucket.
for example:
Old bucket name: xxx-yyy
Sample endpoint : https://s3.amazonaws.com/xxx-yyy/facebook.png
New bucket name: abc-pqr
Sample endpoint : https://s3.amazonaws.com/abc-pqr/facebook.png
Any request coming to the now non-existent xxx-yyy bucket should redirect to the abc-pqr bucket. We do not want to change the endpoints; we just want requests for objects at the old endpoint to be redirected to the new bucket.
It appears that you are referencing files directly in Amazon S3. This format of URL is not able to redirect requests.
Amazon S3 buckets have a capability called Static Website hosting, which gives additional capabilities such as default Index & Error pages, plus the ability to setup a Webpage Redirect.
However, this requires a different URL to access your objects (eg http://xxx-yyy.s3-website-us-west-2.amazonaws.com/facebook.png). Given that you are unable to change your existing links, this would not be an option.
Your only option would be to create web pages in the original S3 bucket that use an HTML redirect to forward browsers to the new location.
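If switching the old links to the website endpoint were tolerable (the answer above explains why the path-style URLs cannot redirect), recreating the old bucket with a redirect-all website configuration would forward every request. A sketch; the region and target hostname are placeholder assumptions:

```shell
# Recreate the old bucket and redirect every request to the new bucket's
# website host. Region and hostname below are placeholders.
aws s3api create-bucket --bucket xxx-yyy --region us-east-1
aws s3api put-bucket-website --bucket xxx-yyy \
    --website-configuration '{
        "RedirectAllRequestsTo": {
            "HostName": "abc-pqr.s3-website-us-east-1.amazonaws.com",
            "Protocol": "http"
        }
    }'
# Note: the redirect only applies on the website endpoint
# (http://xxx-yyy.s3-website-us-east-1.amazonaws.com/...), not on
# https://s3.amazonaws.com/xxx-yyy/... URLs.
```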
With your current setup that's not possible. If you had used CloudFront in front of the bucket, you could have achieved this easily.