S3 Static Website Hosting when Bucket Name is taken? - amazon-web-services

I'm trying to host a simple static website from my AWS account with S3. I had an old, dusty account with lots of strange settings from years of testing, including 'mypersonaldomain.com' and 'www.mypersonaldomain.com' S3 buckets. Anyway, I wanted to start fresh, so I canceled the account.
Now when I go to create 'mypersonaldomain.com' and 'www.mypersonaldomain.com' buckets, it says the bucket names are taken, even though the account was deleted a while ago. I had assumed that Amazon would release the bucket names back to the public; however, when I deleted the account, I didn't explicitly delete the buckets beforehand.
I'm under the impression that to use S3 for static website hosting, the bucket names need to match the domain name for the DNS to work. If I can't create a bucket with the proper name, is there any way I can still use S3 for static hosting? It's just a simple, low-traffic website that doesn't need to be on an EC2 instance.
FYI I'm using Route 53 for my DNS.
[note: 'mypersonaldomain.com' is not the actual domain name]

One way to solve your problem would be to store your website on S3 but serve it through CloudFront:
1. Create a bucket on S3 with whatever name you like (no need to match the bucket's name to your domain name).
2. Create a distribution on CloudFront.
3. Add an origin to your distribution pointing to your bucket.
4. Make the default behavior of your distribution grab content from the origin created in the previous step.
5. Change your distribution's settings to make it respond to your domain name by adding a CNAME (alternate domain name).
6. Create a hosted zone on Route 53 and create ALIAS entries pointing to your distribution.
That should work. By doing this, you have the added benefit of potentially improving performance for your end users.
Also note that CloudFront was added to the Free Tier a couple of months ago (http://aws.amazon.com/free).
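For reference, here's a rough boto3 sketch of those steps, minus the Route 53 part. The bucket and domain names are placeholders, error handling is omitted, and in practice you also need an ACM certificate covering the domain before the alias will serve over HTTPS:

import boto3

s3 = boto3.client("s3")
cf = boto3.client("cloudfront")

BUCKET = "any-available-bucket-name"  # need not match the domain
DOMAIN = "mypersonaldomain.com"       # placeholder from the question

# Step 1: create the bucket (any available name works).
s3.create_bucket(Bucket=BUCKET)

# Steps 2-5: a distribution with the bucket as its only origin,
# a default behavior reading from that origin, and the domain
# attached as a CNAME (alias).
dist = cf.create_distribution(DistributionConfig={
    "CallerReference": "static-site-1",
    "Comment": "Static site served from S3 via CloudFront",
    "Enabled": True,
    "DefaultRootObject": "index.html",
    "Aliases": {"Quantity": 1, "Items": [DOMAIN]},
    "Origins": {"Quantity": 1, "Items": [{
        "Id": "s3-origin",
        "DomainName": BUCKET + ".s3.amazonaws.com",
        "S3OriginConfig": {"OriginAccessIdentity": ""},
    }]},
    "DefaultCacheBehavior": {
        "TargetOriginId": "s3-origin",
        "ViewerProtocolPolicy": "allow-all",
        "ForwardedValues": {"QueryString": False,
                            "Cookies": {"Forward": "none"}},
        "TrustedSigners": {"Enabled": False, "Quantity": 0},
        "MinTTL": 0,
    },
})

# Step 6 (the Route 53 ALIAS) would point at this hostname:
print(dist["Distribution"]["DomainName"])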

In my opinion, S3's requirement that bucket names be universally unique across all users is a tremendous design flaw.
I could, for example, make buckets for any well-known company name (assuming they weren't taken), and the legitimate owners of those domains would then be blocked from ever using them for a static S3 website.
I've never liked this requirement that S3 bucket names be unique across all users - a major design flaw (which I am sure had a legitimate reason when it was first designed), but I can't imagine that AWS wouldn't rethink it now if they could go back to the drawing board.
In your case, with a deleted account, it is probably worth dropping a note to S3 tech support - they may be able to help you out quite easily.

I finally figured out a solution that worked for me. For my apex domain name I am using CloudFront, so it isn't an issue that someone already has my bucket name. The issue was the www redirect: I needed a server to rewrite the URL and issue a permanent redirect. Since I am using Elastic Beanstalk for my API, I leveraged Nginx to accomplish the redirect. You can read all the details in my post here: http://brettmathe.com/aws/aws-apex-redirect-using-elastic-beanstalk/
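For anyone who wants the gist without reading the post: the logic is just "swap the host, keep the path, answer 301". A minimal stand-alone Python sketch of that behavior (the real implementation in the post lives in the Nginx config on Elastic Beanstalk; the apex hostname here is a placeholder):

from http.server import BaseHTTPRequestHandler, HTTPServer

APEX = "mypersonaldomain.com"  # placeholder apex domain

class WwwRedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # 301 = permanent redirect; keep the originally requested path.
        self.send_response(301)
        self.send_header("Location", "https://%s%s" % (APEX, self.path))
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), WwwRedirectHandler).serve_forever()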

Related

Amazon S3 - 1 bucket with a folder per sub-domain?

I need to create a service that allows users to publish a static page in a custom subdomain.
I've never done this so excuse me if the question sounds a bit too basic.
To do so, I would like to host all those static files in something like Amazon S3 or Google Cloud Storage to separate them from my server, make them scalable, and secure it all.
While considering Amazon S3, I noticed a user account is limited to 100 buckets, so I can't just use a bucket per customer.
I guess I could use one bucket for multiple users by creating folders in it and pointing each folder to a different subdomain?
Does this sound like a proper solution to this problem?
I was also reading that you can't just point any subdomain to any bucket - the names have to match. Wouldn't that be a problem here?
You can do it: one bucket, one folder per website - but you would then use Amazon CloudFront to serve the data instead of S3 directly. The custom domain would point to CloudFront, and CloudFront would have a different distribution for each website (each backed by the matching folder under the single bucket). It's not as complicated as it sounds, and it is probably the best way to do what you want.
You are correct, though: there is a 100-bucket limit (without requesting more), and the bucket name must match the domain name exactly (which can be a problem), but those restrictions don't apply if you use the CloudFront solution mentioned above.
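A hedged boto3 sketch of that setup - one shared bucket, one distribution per customer site, each origin scoped to its folder via OriginPath. All names are placeholders, and certificate/HTTPS details are left out:

import boto3

cf = boto3.client("cloudfront")

def site_config(subdomain, folder):
    # Minimal DistributionConfig serving one folder of the shared bucket.
    return {
        "CallerReference": "site-" + subdomain,
        "Comment": "Static site for " + subdomain,
        "Enabled": True,
        "Aliases": {"Quantity": 1, "Items": [subdomain]},
        "Origins": {"Quantity": 1, "Items": [{
            "Id": "shared-bucket",
            "DomainName": "shared-sites-bucket.s3.amazonaws.com",
            "OriginPath": "/" + folder,  # per-customer folder in the bucket
            "S3OriginConfig": {"OriginAccessIdentity": ""},
        }]},
        "DefaultCacheBehavior": {
            "TargetOriginId": "shared-bucket",
            "ViewerProtocolPolicy": "redirect-to-https",
            "ForwardedValues": {"QueryString": False,
                                "Cookies": {"Forward": "none"}},
            "TrustedSigners": {"Enabled": False, "Quantity": 0},
            "MinTTL": 0,
        },
    }

# One distribution per customer; the 100-bucket limit never comes up.
cf.create_distribution(
    DistributionConfig=site_config("alice.example.com", "alice"))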

Use S3 for a website in multiple regions

I want to host my website on S3 (because it seems cheaper and I don't have any server-side scripts). I have a domain, and I want it to point to my S3 website. So far, I have enabled static website hosting on my S3 bucket and set a Route 53 record set's alias target to the S3 website endpoint. It's working, but it's not good enough: I want it to deal with multiple regions.
I know that Transfer Acceleration can automatically sync files to other regions so access is faster from there, but I don't know how to make it work with Route 53. I hear that some people use CloudFront for this, but I don't quite understand how, and I don't want to manually create buckets in several regions and set each one up by hand.
Do you have any suggestions for me?
If your goal is to reduce latency for users worldwide, then Amazon CloudFront is definitely the way to go.
Amazon CloudFront has over 100 edge locations globally, so it has more coverage than merely using AWS regions.
You simply create a CloudFront distribution, point it to your S3 bucket and then point your domain name to CloudFront.
Whenever somebody accesses your content, CloudFront will retrieve it from S3 and cache it in the edge location closest to that user. Then, other users who want the data will receive it from the local cache. Thus, your website appears fast for many users.
See also: Amazon CloudFront pricing
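For the "point your domain name to CloudFront" step, here's a hedged boto3 sketch of the Route 53 ALIAS record (your hosted-zone ID and distribution hostname are placeholders; Z2FDTNDATAQYW2 is the fixed hosted-zone ID CloudFront uses for alias targets):

import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="ZXXXXXXXXXXXXX",  # placeholder: your domain's zone ID
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "example.com.",
            "Type": "A",
            "AliasTarget": {
                # Fixed zone ID used for all CloudFront alias targets.
                "HostedZoneId": "Z2FDTNDATAQYW2",
                "DNSName": "dxxxxxxxxxxxx.cloudfront.net.",  # placeholder
                "EvaluateTargetHealth": False,
            },
        },
    }]},
)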

How do I route multiple domain names to the same amazon S3 bucket location?

I have hosted a React application in an S3 bucket.
I want to access that same application with different domain names.
For example:
www.example.com, www.demoapp.com, www.reactapp.com
I want to access the S3-hosted application with each of these domains.
How can I do that?
Are there any other Amazon services I need to use?
If you have done something like this, please help me.
Use Amazon CloudFront and you should be able to do this: http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/CNAMEs.html
You won't be able to do it as a standard S3 static website, but using CloudFront will make it possible.
You can achieve this using the Amazon CloudFront distribution service:
1) Create a new CloudFront distribution.
2) Select the bucket name that you want to use as the origin.
3) Add the multiple domain names you want as alternate domain names (CNAMEs).
4) CloudFront will generate a new hostname, something like abcded.cloudfront.net.
5) Add abcded.cloudfront.net as a CNAME record in each of your original domains' DNS.
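If the distribution already exists, step 3 can also be scripted with boto3. A hedged sketch (the distribution ID is a placeholder, and in practice attaching aliases also requires a certificate that covers them):

import boto3

cf = boto3.client("cloudfront")
DIST_ID = "EXXXXXXXXXXXXX"  # placeholder distribution ID

# Updates require the current config plus its ETag.
resp = cf.get_distribution_config(Id=DIST_ID)
config = resp["DistributionConfig"]
config["Aliases"] = {
    "Quantity": 3,
    "Items": ["www.example.com", "www.demoapp.com", "www.reactapp.com"],
}
cf.update_distribution(Id=DIST_ID,
                       DistributionConfig=config,
                       IfMatch=resp["ETag"])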
If you are trying to route multiple domain names to the same S3 bucket, then you are doing something wrong and you need to rethink your strategy.
Domain names are meant to be a unique way of identifying resources on the internet (in this case, your S3 bucket), and there is no point in routing multiple domain names to the same S3 bucket.
If you really want to do this, there are a couple of options:
Option one:
The easy option is to point domain1.com to the S3 bucket and then have a redirect rule that redirects requests for domain2.com and domain3.com to domain1.com (see the sketch at the end of this answer).
Option two:
Another way is to create 3 S3 buckets, duplicate the content across all 3, and use a different domain name to point to each one. This, of course, creates a maintenance nightmare: if you change the application in one bucket, you have to make the same change in all the other buckets. If you are using Git to host your application, you can at least push to all 3 S3 buckets at once.
Again, all of the above is really nothing but a hack to get around the fact that domain names point to unique resources on the internet, and I can't really see why you would do something like this.
Hope this gives you some insight on how to resolve this issue.
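For option one, S3 itself can do the redirecting: each extra domain gets an empty website bucket whose only job is to redirect everything to the main domain. A hedged boto3 sketch using the question's example domains (assumes the default region; elsewhere create_bucket also needs a LocationConstraint):

import boto3

s3 = boto3.client("s3")

# Each alias bucket must be named after the host it will serve.
for alias in ("www.demoapp.com", "www.reactapp.com"):
    s3.create_bucket(Bucket=alias)
    s3.put_bucket_website(
        Bucket=alias,
        WebsiteConfiguration={
            "RedirectAllRequestsTo": {
                "HostName": "www.example.com",  # the primary domain
                "Protocol": "http",
            }
        },
    )

Each of those domains then needs a DNS record pointing at its bucket's website endpoint, exactly as with a normal S3 static site.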

One domain to multiple S3 buckets based on geolocation

We want to serve the images in our application as fast as possible. As we already have an AWS setup, we would prefer to host our images in S3 buckets (but are open to alternatives).
The challenge is routing each request to the closest S3 bucket.
Right now we use Amazon Route 53 with a geolocation routing policy to the closest EC2 instance, which redirects to the respective bucket. We find this inefficient, as the request goes:
origin -> DNS -> EC2 -> S3, and we would prefer
origin -> DNS -> S3. Is it possible to bind two static website S3 buckets to the same domain, where requests are routed based on geolocation?
PS: We have looked into CloudFront, but since many of the images are dynamic and only viewed once, we would like the origin to be as close to the user as possible.
It's not possible to do this.
In order for an S3 bucket to serve files as a static website, the bucket name must match the domain that is being browsed. Due to this restriction, it's not possible to have more than one bucket serve files for the same domain because you cannot create more than one bucket with the same name, even in different regions.
CloudFront can be used to serve files from S3 buckets, and those S3 buckets don't need to have their names match the domain. So at first glance, this could be a workaround. However, CloudFront does not allow you to create more than one distribution for the same domain.
So unfortunately, as of this writing, geolocation-based routing is not possible with S3 buckets alone.
Edit for a deeper explanation:
Whether the DNS entry for your domain is a CNAME, an A record, or an ALIAS is irrelevant. The limitation is on the S3 side and has nothing to do with DNS.
A CNAME record will resolve example.com to s3.amazonaws.com to x.x.x.x and the connection will be made to S3. But your browser will still send example.com in the Host header.
When S3 serves files for webpages, it uses the Host header in the HTTP request to determine from which bucket the files should be served. This is because there is a single HTTP endpoint for S3. So, just like when your own web server is hosting multiple websites from the same server, it uses the Host header to determine which website you actually want.
Once S3 has the Host header, it compares it against the available buckets: S3 was designed so that the bucket name is what gets matched against the Host header.
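You can see this Host-header matching from any HTTP client. A hedged stdlib sketch (the domain is a placeholder; s3-website-us-east-1.amazonaws.com is that region's website endpoint):

import http.client

# Connect to the regional S3 website endpoint, but present the
# custom domain in the Host header, as a browser would.
conn = http.client.HTTPConnection("s3-website-us-east-1.amazonaws.com")
conn.request("GET", "/", headers={"Host": "example.com"})
resp = conn.getresponse()

# 200 if a website bucket named exactly "example.com" exists in this
# region; otherwise S3 answers 404 "NoSuchBucket".
print(resp.status, resp.reason)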
So after a lot of research, we did not find an answer to the problem. We did, however, update our setup. The scenario is that a user clicks a button and views some images in an iOS app. When the user pushes the button, the request is geo-routed to the nearest EC2 instance for faster performance. Instead of returning the same image links in the EU and the US, we updated it so that clicking in the US returns links to an American S3 bucket, and the same for Europe. We also put up two CloudFront distributions, one in front of each S3 bucket, to increase speed.

Integrating Akamai with S3 bucket

I want to serve the contents stored in my S3 bucket with Akamai, not with Amazon CloudFront.
Is there any way to integrate Akamai with an S3 bucket?
It's quite involved, Sathya, but I suggest you contact a solutions architect if you are configuring production systems. It's risky if you are doing this for the first time, and things can go wrong. Anyhow, I am writing the steps here, though they will not cover everything.
Go to the Luna Control Center and choose Configure -> Tools -> Edge Hostnames -> Create Edge Hostname.
Make sure you have configured your S3 bucket as a static website; that makes it easier to access. The name of the S3 bucket should be the name of the domain or subdomain. Enter the endpoint of the bucket or your subdomain name, and Akamai will give you an edge hostname. Copy the endpoint generated by Akamai.
Go to Configure -> Property -> Site.
Choose the configuration you want to edit, or create a new configuration from an existing one; here you should be careful. This is where Akamai's people can help you understand and set the configuration.
Yes, you can integrate your S3 buckets with Akamai; once you have access to Akamai's Luna Control Center you can do it. I have done it. It's better to contact Akamai customer support than to post here.